|
Post by 4real on Jan 19, 2012 6:48:40 GMT -5
Things like this are coming along. The Gibson auto-tuning guitars can also do this: just strum the guitar and it will retune...however, it does have hex capability, come to think of it. Melodyne's new 'auto-tune' can separate out a harmony part, and there are a few things developing from that for transcribing complex parts.
It is hard to foresee the future you are right.
|
|
|
Post by sumgai on Jan 19, 2012 12:54:56 GMT -5
4,
In point of fact, the demonstrator was using an Axon unit (he didn't say which one), and those folks are the only ones who can do split zones like you saw. (It happens that Graphtech seemingly has a thing for Axon, and would rather not mention Roland if they can help it. Dunno why...) Roland's units don't do that, mainly because the technology is still under patent.
Axon used to use, and sell under their own name, a hex pickup made by Yamaha, but that outfit got out of the guitar end of the MIDI business some time ago. (They still make other MIDI stuff.) But since that parting of the ways, way back when, Axon is only too happy to recommend Roland hex pickups, Shadow's products (hit-or-miss, sometimes they shine, other times they fall flat on their collective face), or others as they come and go on the market.
In case anyone's forgotten, my previous axe had a Ghost system - sorry as Hell I ever sold it - the Ghost system, not the guitar. But that's what made the sale, so.....
On another topic, Axon will sell you a bare pitch-to-MIDI converter, such as ash's GI10 from Roland (or the newer GI20), or a whole box, complete with a synth module. That module used to be a Yamaha jobbie, but since I don't use their products, I can't say with any authority what kind of synth they're using now.
And I did forget to mention, above, my personal favorite instrument to synthesize - the bagpipes. Roland's GR-55 does this so well, I can't say enough good about it. Don't like the bagpipe sound? Hunch up with an Ebow, and roll out Amazing Grace - trust me, there won't be a dry eye in the house when you're done. Or use the built-in sustainer - after all, it's synthesized, not a modelled-and-modified analog sound from your strings.
Man, it's taken me years to learn how to do this stuff, too bad I don't use it anymore. Sometimes I wish.......
sumgai
|
|
|
Post by reTrEaD on Jan 19, 2012 13:44:51 GMT -5
I've not seen these, but I'll take your word for it. I was thinking of this tuner from Digitech: Hardwire HT-6. I found a video on YouTube about this tuner. It seems fast. I won't draw any definite conclusions, due to the possibility of sync errors between audio and video. But at first glance, it looks promising.
|
|
|
Post by ashcatlt on Jan 19, 2012 16:17:32 GMT -5
I didn't listen to the demo, and it's a bit off topic, but...
If by "zone splits" we mean having different sounds from different parts of the fretboard (as opposed to different strings) then this is a function of the synth module, not the guitar>midi converter itself. Any decent multi-timbral synth will do it. Perhaps Roland doesn't build this functionality into their "guitar synth" modules. I guess they've dropped the ball there!
You do need to have enough voices available, though, because the split is based on the note number, not necessarily the fret number. Say for example I set my GI10 to "mono" mode - where it sends all 6 strings on one midi channel - and then set up my synth so that the low E switches at the 12th fret. Now the A string will switch at the 7th fret, the D at the 2nd, and the rest will always play the higher timbre.
Instead I need to set the GI to "multi" mode so that it sends each string on a separate channel. Then I assign two sounds to each channel - one for the high sound and one for the low. So, that's twelve all day. You can have more zones (usually called key zones, since these things tend to expect input from a keyboard controller), but most modules top out at 16 individual voices. Of course, there's no reason you couldn't use multiple modules...
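A quick sketch of the arithmetic behind that split (plain Python, nothing vendor-specific assumed; the split point and the standard-tuning MIDI note numbers are the only inputs). It shows why a key split set at the low E's 12th fret lands on a different fret of every other string:

```python
# Why a split on MIDI *note number* lands at a different fret per string.
# Standard-tuning open-string MIDI notes: E2=40, A2=45, D3=50, G3=55, B3=59, E4=64.
OPEN_STRINGS = {"E": 40, "A": 45, "D": 50, "G": 55, "B": 59, "e": 64}

SPLIT_NOTE = 40 + 12  # low E at the 12th fret -> MIDI note 52

def split_fret(open_note, split_note=SPLIT_NOTE):
    """Lowest fret on this string that falls in the upper zone,
    or 0 if even the open string is already above the split."""
    return max(split_note - open_note, 0)

for name, open_note in OPEN_STRINGS.items():
    print(f"{name}-string: upper timbre from fret {split_fret(open_note)}")
```

Running it reproduces the post's numbers: the A string switches at the 7th fret, the D at the 2nd, and the G, B and high E always play the higher timbre.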
Most real midi modules will also do velocity switching so that you can trigger different sounds by picking harder or softer.
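Velocity switching is the same idea on the other axis. A minimal sketch (the two-layer setup and the threshold of 80 are invented for illustration, not any particular module's defaults):

```python
# Hypothetical two-layer velocity switch: picking softly triggers one
# sample layer, digging in triggers another. Threshold is arbitrary.
def pick_layer(velocity, threshold=80):
    """Return which sample layer a MIDI velocity (1-127) selects."""
    return "soft_layer" if velocity < threshold else "hard_layer"

print(pick_layer(40))   # gentle fingerpicking -> soft_layer
print(pick_layer(110))  # aggressive pick attack -> hard_layer
```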
As I said all of these things are functions of the synth module taking the midi data and deciding what to do with it. The converter itself just sends a note on with a given number and velocity on a given channel. Of course, if your converter and module are in the same box... Most have midi outs, though, so you can bypass Roland's artificially imposed limitations.
|
|
|
Post by 4real on Jan 19, 2012 19:39:56 GMT -5
Well, it is interesting to see the developments and all. Demos can be impressive, making musical use of it, well...just not seeing the results as yet.
From a broader perspective, there is something a bit 'weird' about seeing a guitar player, live or even in the home, making 'piano' sounds or bagpipes for some reason. It seems like a cool idea, and in the privacy of the studio, well, anything goes...but in person...hmmm.
I mean, I could do things a lot easier and just as convincingly in many ways with a solid body instrument than my half acoustic project, but I have had these negative perceptions, even from that!
I always thought Pat Metheny had an interesting and recognisable synth guitar sound, but it still does not cut it with fans compared to his signature electric sound.
I did look into things like the GR55 and was also impressed, but really it could be relegated to a cool toy in a lot of ways for practical purposes.
Still, it is out of my league financially and practically in a lot of ways. Although not at all a keyboard player, I have a midi setup with a full-sized keyboard that is far cheaper, more reliable, and has far more range than a guitar.
As for the 'fakery'...well a midi controller is just triggering things and while things are becoming quite convincing with sampling and such, I have yet to hear a synth guitar sound that is totally convincing, even if triggered from a guitar controller. Close, but...
This does say something to me, I like midi, it is a fantastic tool and for home studio stuff indispensable. For a lot of people if done well, no one would ever know and of course we hear it far more than anything else for sound tracks and the like. There is no musical reason for not using it and beats buying and learning the bagpipes.
I think the 'resistance' and midi 'backlash' is more complex than is being characterised, there is some truth in it of course but still...hmmm.
The sustainer thing is an interesting one. I think that the sample-and-hold thing is obviously a lot more versatile and is what a lot of people might expect when they are thinking 'sustainer' or even Ebow...but they are completely different things.
A driven string is quite a different thing from holding a note. I personally don't like that 'name' as it is about far more than just holding a note, that misses the whole point there. It is far more organic than that, not to mention the whole harmonic aspect that at best can be simulated.
I mean, a violin player does not think of the bow as a 'sustainer' now do they? The fascination with such a device, like a bow is that it is not something that can be modelled or synthesised and this is similar to all instruments in the synth world. It is not just a matter of overall 'timbre' but a heap of things that a good musician will get out of things. Midi tends to be reductionist and in so doing misses out on so many idiosyncrasies of real instruments and the instinctual real time details of a real player.
To get close you need to tell a midi synth or sampler so many things; huge orchestral programs can add to these things, but the detail is phenomenal and so hard to do convincingly, whereas with a good player and instrument these things are a matter of course.
That's not knocking midi at all, it is an incredibly useful tool, but I have yet to see the convincing results from it that others can get with a guitar and amp. All I ever see are these kinds of 'demos' I have to say...but perhaps you can point me in a direction that expands those perceptions.
To me, it is very similar to music notation: it is a representation of the music and never lives up to the intricacies of timing and such of the real thing. Midi has exactly the same limitations in many respects. It would be great, though, to have the capacity to notate things this way. You can enter such things in Score or Guitar Pro, but can you really say it sounds like the 'real thing'?
|
|
|
Post by sumgai on Jan 20, 2012 2:20:58 GMT -5
I didn't listen to the demo, and it's a bit off topic, but...
If by "zone splits" we mean having different sounds from different parts of the fretboard (as opposed to different strings) then this is a function of the synth module, not the guitar>midi converter itself. Any decent multi-timbral synth will do it. In reverse order - nope, and you should've. Zone splits are able to detect where along the string you picked, and where you fretted it - the actual frequency is not a factor. But as I said, this is patented by a fellow who worked/works for Axon, and not available to other manufacturers for awhile yet. What you said about a synthesizer taking a Note (pitch, or frequency) and using that to trigger a different sound as well as the note, that's true. However, that's the synth using incoming MIDI data, not the pitch-to-MIDI converter hardware itself. Lurk in the various MIDI guitar groups on Yahoo for more details. I'll assume you mean a MIDI module to be a synthesizer, right? Then yes, what you said is true.... except that I take exception to your use of "real". What's that, the opposite of 'fake' MIDI modules? I might feel less put out if you'd said 'limited' or some such, because I have Roland units that are both ways - the GR-30 and -33 can't do this, but the newer GR-55 can. Does this mean that I spent beaucoup bucks for a fake something? Well, I'd characterize it more likely as Roland's lawyerly imposted limitations, due to fears of lawsuits and such. But then again, what do I know? HTH sumgai
|
|
|
Post by 4real on Jan 20, 2012 2:30:09 GMT -5
Here's a clip about the Melodyne DNA, which shows where things are moving in the virtual music-making world...
While we often think of Melodyne and Auto-Tune as the 'fakery' of a Justin Bieber and such, it is a perfectly valid way of working with sound. (Of course one could take these things, separated out of polyphony, and back to midi, but it is not necessary and in many ways it is 'going backwards'.) I have heard some quite impressive music making from this kind of technology.
I've always had a soft spot for 'melodysheep' and his Symphony of Science has been a worldwide hit of sorts, auto-tuning everyone...I've always liked this surfer, ripped from a news broadcast, myself...
I know, not for everyone, but still musically as valid as anything else.
Real time, though....that takes a lot of power to de-construct in Melodyne; to be able to do that and output MIDI with a minimum of acceptable latency....a fair way to go, I suspect.
Midi guitars, well, they tend to require the player to eliminate a lot of the nuances, and such nuances are where the magic lies; not to mention that things are generally pretty limited as to how much one can control such aspects in real time with any particular sound. For a lot of people, a string pad might sound like an orchestra, and for many purposes it will do, but it is not quite 'right' compared to a real instrument like a guitar well played...at least at this point and to my ears.
I believe there are competitions that attract some very notable experts in such things to produce midi copies, with any resources and time to get such details, and while many come close, they never quite get 'it', for all the technology and sample libraries about, for some reason.
|
|
|
Post by sumgai on Jan 20, 2012 15:15:23 GMT -5
Pete,
Part of that "not quite right" issue is the fact that a string pad, for example, isn't being reproduced by a full orchestra, which would entail lots of sound sources; it's coming only from a pair of speakers at the sides of the stage. I can agree with you, objectively, but the greater part of any audience aren't going to care.
However, if one really wishes to, and I've done this to prove the point, one need merely drive two synth modules that are set to the same patch, and feed them to two distinctly separate sound reinforcement systems. Locate the speakers well away from each other, and you're going to be in for a treat, I guarantee it.
I see you picked up on that bit about the processing power required by Melodyne. Can't use that live on a stage just yet, sorry to say. But I foresee a time not too far into the future..... I also predict that this will bring out a helluva lot of single-member "bands", who can do it all. I'd be one such.... after all, my Indian name is "Doesn't play well with others". ;D ;D
sumgai
|
|
|
Post by 4real on Jan 20, 2012 17:26:29 GMT -5
Ahhh...I am with you on that side of things to a great extent, certainly what I am trying to do now and an increasing trend in the growing environment of the home studio.
I'd love to 'play with others' but there is an increasing isolation and change in the way music is being made, perhaps, as well...and it is not just that I am living on an island these days. Economically and practically there is a lot of hassle with trying to run a lot of places; drums take up space and set a level, and all kinds of PA is required for that.
Still, my feeling and anecdotal experience is that audiences will be open to a guitar player playing multiple parts that sound like a guitar, far less so to a guy with a guitar singing along to a backing track (and things like drum machines are perceived as, or effectively are, that too) as a kind of karaoke.
Similar with a synth or even effects, if it is one guy and an instrument, there is a perceptual need it seems for there to be some idea that it is coming out of that instrument. Looping in some ways seems to be an exception perhaps because the sound source is the same.
It's something I've thought of a bit, it would certainly be easier to attempt to play against a backing track for instance. Playing multiple parts on one instrument, especially something as limited as a guitar is a tricky business (and in there lies the skill to pull it off) but I am not sure that having synth sounds would diminish the perception some how.
There is also something to the intimacy and complexity of a real instrument, all those nuances and such that need to be weeded out of the playing to trigger things well are often where the character is.
Being a little of a devils advocate here, there is a sense of a 'fast food' about it with things as radical as a lot of synth use...sure, most people don't really care if there is a 'string pad' as opposed to multiple sources and such, they rarely have that experience anyway. One can even make that a feature and synth strings can be extremely effective in their synthetic nature.
It's a conundrum; I've just not seen an artist come forward with a convincing synth-based one-man band on the guitar...there were a few in the 80's on keyboards (think Thomas Dolby or Howard Jones) that were getting away with such things....
I'm not saying it isn't possible, but I've not seen it with a guitar or audience acceptance perhaps...so far...perhaps the DJ movement is close...hmmm
It is an interesting thing and something that I am putting a bit of thought into at the moment though not specifically in regards to 'synths'...a part of my current project really is to create a 'surround sound' multiple source kind of experience with more textural variation though I do sense something of that backfiring...there is some intimacy of the single source of a single instrument.
Clearly I don't have a set mind on such things and certainly not 'anti-midi' or audio manipulation and triggering but it is a bit of a conundrum all the same. I imagine that it could be done effectively some how by someone...hmmm
There is a danger of 'not playing well with others' tipping over into 'not playing well for others' that I am constantly aware of lately...it is an important self reflection.
Some interesting thoughts though; it would be interesting to hear how others approach this. Is the 'band' dead? With so many musicians making music in a virtual world, is a live performance obsolete, and if so, how does that affect the music?
|
|
|
Post by sumgai on Jan 20, 2012 23:13:22 GMT -5
Pete, Speaking of looping, and specifically on a guitar, I've included a link, below.
Your mention of skill, just before this paragraph, is really the ability to distill all the complexities down into the simplest possible production. That does not mean to get rid of subtle nuances; it means instead to render those nuances such that they are not only present, but appreciable by a listener. This also goes to the YouTube link below.
I'll see your Thomas Dolby, and raise you a Johnny Wright. This oughtta take care of your one-man-guitar-band concerns, and also brings in a bit of your 'socially acceptable' looper:
Well now you've hit the nail exactly on the head as to why I gave it all up. Others were no longer interested in what I was doing, and the ability to engage audiences, even just to gauge their reactions, was being stifled down to inexcusably non-ROI. IOW, it no longer paid to go out and even try to play. Ergo, fini.
No, the band is not dead. Not so long as there are groupies, and groupie-chasers who think that liquor and music go hand-in-hand in getting some action.....
sumgai
|
|
|
Post by cynical1 on Jan 21, 2012 0:40:39 GMT -5
I think the crux of all of this comes down to intent. We'll visit that thought again in a bit.
MIDI for guitar is certainly not new. It's been around long enough, honestly, to be farther along than it is. I think what has held it back is the same reason why Strats, Teles and Les Pauls are still around and held in such pious regard...purist dogma.
How did an instrument that had become synonymous with rebellion, and was once tagged a threat to decent society, become so tightly defined and restrained? Beats the Hell out of me...probably just human nature to define and pigeonhole something to remove the threat...
Well, I'll see your Thomas Dolby and Johnny Wright, and raise you a Les Paul
Granted, it's no MIDI synth guitar, but it is one guitarist filling in multiple parts and then bringing in the rest of the band to fill out a small ensemble into something larger than itself...all in a live setting...and that sure sounded like applause to me...
Sounds like a familiar scenario. And it touches on what I believe to be a trend in music. Musicians are finally waking up to self promotion and production versus "getting signed".
I believe the traditional music production formula is coughing up blood at this point, as it so rightly deserves. With the advent of recording software and hardware for the home it is now possible to record respectable tracks without the need for a studio or record company.
As young musicians become old musicians, the number of players dwindles. It doesn't mean the spark dwindles. With a MIDI guitar you can use the same instrument you have been honing your skills on for decades and utilize it to add whatever the mood strikes to your songs...without having to hunt down 10 different musicians.
And if you've ever hired studio musicians, they don't come for free. There's studio time and the per-hour rate for the player and the engineer. So what if you want a B3, a sax or a string section...or even bagpipes...you've now got a bevy of studio talent you're paying for, plus the engineer...OK, so you can get the bagpipe guy for some haggis...but you get the gist.
With a MIDI guitar you can now add any or all of these instruments, or just about anything you can program, all in the privacy of your own home studio.
Not all of us can play keyboards with the dexterity or skill we can play a guitar or bass.
And you can earn a living with that kind of flexibility...and sleep in your own bed every night.
I'm not sure about other parts of the world, but venues in the US have been vanishing for decades and not likely to ever return. Even if you can assemble enough like minded musicians to form a band the numbers of places to get paid to perform and hone your chops are just not there. This makes it unsustainable for the average musician unless he is well funded and is willing to live on the road 300 days a year. And even that is getting harder as fuel prices climb higher every year.
This is what is eroding the "let's form a band and play gigs for money" tradition, not MIDI guitars.
The virtual band is the wave of the future. It is here and it will not be going away anytime soon. As bandwidth speed improves it will allow for the creation of virtual venues for musicians from around the world to play together in real time.
Without the shackles of contractual obligations musicians will be free to collaborate with whoever they chose.
Just as early television was starving for original content specific to the medium, virtual bands have a viable future via the Internet. These virtual venues have yet to really come into their own in the Internet medium...but the market is there and it will become as common in 10 years as a $1.00 cover charge was in the 70's.
And the technology will improve and this will become a reality. I'd put money on this...if I had any...
MIDI guitars are a way to bring the guitar away from the dogged purists and give it a new voice and new life. Evolution is. This is just another mutation in the sequence.
And the music will be just fine. As long as someone is willing to play it, someone will be willing to listen.
Well, you got me there...
Happy Trails
Cynical One
|
|
|
Post by newey on Jan 21, 2012 7:46:04 GMT -5
Amen to that. The trend started way before MIDI; live venues started disappearing with Disco. Bar owners found out that people would show up to spend money to drink and dance to recorded music, and a DJ was/is cheaper than a band.
Even living in a college town during that time, I recall that, one by one, live venues went by the wayside, and morphed into "dance clubs".
And it largely works because the average person, not a musician, doesn't really have adventurous tastes; familiarity trumps originality every time.
The City of Akron puts on a series of outdoor summer concerts in a downtown park; these are free, and feature mostly local bands. Last spring, I recall looking at the schedule to see if there were any I wanted to attend. These are held weekly from June to August, so there's roughly a dozen or so.
To my dismay, every single one of these shows last summer featured a "tribute band".
Many of these bands no doubt feature talented musicians- you'd have to be to pull off a convincing "tribute" set- but I think many of them would rather be doing original stuff instead. However, they know it won't sell, so if they want to make money playing live music, they've got to crank out familiarity.
People who go out to see a band don't want to risk the unfamiliar-after all, the band might suck, and then one's hard-earned money is spent having a not-so-good time. When one can afford one night out a week, better to go with what you know rather than take a chance.
It wasn't always like this- in the early days of Rock, in the late '60s and early '70s, folks were more open to musical experimentation, and would go see a band that perhaps they knew nothing about. But that was a short-lived (and perhaps drug-fueled) phenomenon.
Neil Young said it best:
"Live music is better."
EDIT: I don't want to sound overly pessimistic, however. I should add that, based on my weekly contact with the "college scene", things may be changing. Several new live venues have sprouted up recently, and there's a whole spate of local bands playing in them. These mostly run to the post-punk "grunge" end of things, with a dose of emo/goth thrown in; and I doubt they're getting paid much, if anything. But they're definitely original . . .
There are also a few bands pursuing the "jam band" formula who seem to be regularly gigging.
There seems to be a bit of a resurgence among the "anti-trendy" types, who wouldn't be caught dead getting all dressed up for the dance club scene. To which I say "Hallelujah!", even when their style of music may not be my cup of tea . .
|
|
|
Post by sumgai on Jan 21, 2012 13:29:13 GMT -5
Ya gotta give Les credit for chutzpah, stating that he invented the looper. I can't be certain when he started such experiments, but I do know for a fact that Chet Atkins was doing it, in the studio of course, in..... 1953. Yes, that's not a misprint - Mr. Guitar was always willing to try something, anything, just to stand out from the crowd. (But remember, back then 'the crowd' was what we'd call pretty damn sedate, by today's standards.)
But I was really getting to the 'looper' point in 4real's dissertation, not so much the MIDI. It allowed J.W. to sound like much more than he can pull off at any one moment in time, and notice - no MIDI was involved in the making of his tune(s). Although in case no one has guessed it, the Boss RP-50 has a buttload of MIDI-ized control capabilities, both in and out. As one would expect from an over-$150 article from Boss/Roland. But I digress......
sumgai
|
|
|
Post by ashcatlt on Jan 21, 2012 16:15:34 GMT -5
Boy, we've waded off into a different pool here altogether, I think. I almost feel silly talking about MIDI now...
In reverse order - nope, and you should've. Zone splits are able to detect where along the string you picked, and where you fretted it - the actual frequency is not a factor. But as I said, this is patented by a fellow who worked/works for Axon, and not available to other manufacturers for awhile yet.
Okay, yeah. Now we've got the whole "how dey do dat?" thing going! That is cool, but still limited to interpretation at the sound module end.
And yes, I was a bit disparaging. When I started getting into music I got a couple hand-me-downs from my father: an Alesis MMT-8 sequencer and HR-16 drum machine, and a Korg M3R synth module. I had the manual for the Alesis devices (they shared a manual!) and a spiral-bound book titled "The M3R Bible", and I read that book as such. Even down the road I would always dive into the manual of any synth I got my hands on, just to read about all the cool features it had and the different ways I could manipulate the sounds. Key and velocity splits are pretty basic things which should be included in any multi-timbral synth module as far as I'm concerned. As a matter of fact, for quite a while now most respectable sample-based synths have included these things internal to individual sounds or "programs" or whatever they decide to call them.
It's got a little something to do with the "nuances" that pete is talking about. If you wanted to create a sample-based piano sound your first instinct might be to record a strike of one key - maybe middle C? - at some "average" velocity and then use the sampling computer to play that sample back faster when it receives midi notes higher than that note, and slower for lower notes. You would play it back louder for higher velocity numbers and quieter for lower velocities. Done. But that won't sound particularly natural.
The timbre and envelope will change in ways that don't happen on a real instrument. We are really pretty sensitive to this, and can start to tell that it sounds "fake" after a fairly small amount of this type of transposition. The harmonic envelope of a piano note is also quite different when it's struck very softly from when you really bang on the thing. So you can use key splits and velocity switching to make up for this. If you really wanted to get as close as possible to a real piano you would sample 127 strikes - getting progressively more forceful from barely touching to as hard as possible/practical - for each key on the keyboard. Then you'd have the sampler engine simply play back the sample which corresponds to the incoming note number and velocity. That takes a bit of processing power, but isn't really all that tough even for the chips we were running back in the 80's. But you've got to store all those samples somewhere. Each of these samples needs to be long enough to capture a fair representation of the full note envelope - attack, decay, sustain, and release. And you're going to need to loop that sustain portion, so the sample needs to be long enough for it to settle to a fairly steady state, and you want the loop to be fairly long in order that any periodicity in it will be too long to notice. Even if 1 second is long enough for all that we're talking...what?...88 keys x 127 velocities = 11,176 seconds = almost 190 minutes worth of samples. Mono samples at 16bit/44.1kHz run right around 5Mbyte/minute, so we need almost a Gigabyte for this one sound! My machine here could load that into RAM a few times over, and I've got a couple Tb of storage, but back in 1988 it was only real eggheads who had even dared to dream of a full Gigabyte! Even today it's not exactly cheap when one considers that this really needs to be solid-state storage to meet all the design criteria. And then you've got one great sounding piano. 
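The back-of-envelope math above checks out, and it's easy to verify under the same assumptions (88 keys, one 1-second sample per velocity value, mono 16-bit/44.1kHz audio):

```python
# Verifying the storage estimate for a fully velocity-sampled piano.
KEYS, VELOCITIES = 88, 127
SECONDS_PER_SAMPLE = 1
BYTES_PER_SECOND = 2 * 44_100  # mono, 16-bit (2 bytes) at 44.1 kHz

total_seconds = KEYS * VELOCITIES * SECONDS_PER_SAMPLE
total_bytes = total_seconds * BYTES_PER_SECOND

print(f"{total_seconds} s of audio = {total_seconds / 60:.0f} minutes")
print(f"{total_bytes / 1e9:.2f} GB")  # just under a gigabyte
```

That prints 11176 seconds (about 186 minutes) and roughly 0.99 GB, matching the "almost a Gigabyte" figure in the post.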
There are a few out there willing to pay a grand or two for a great sounding MIDI piano, but to really move any volume you're going to need more options. At the very least maybe a grand piano, an upright, maybe an electric or whatever... So let's meet in the middle somewhere. And that's what most synths do. I don't know for sure how many use velocity layers at all, but at most there might be a couple/few. But they do more key splits. It's a matter of sampling enough times across the keyboard so that you're not transposing any one sample too far. Maybe one sample per octave, or slightly more often. If you're lucky and careful you won't hear the switchover too much, but on a lot of synths it's pretty obvious.
More to pete's point: keyboard instruments like pianos (and to a somewhat lesser extent the chromatic percussions like xylophones and marimbas) are pretty easy to recreate via MIDI and sampling because there really is only one option for articulation. The hammer hits the string in exactly the same place and from exactly the same angle every time you press the key. Note number and velocity are all you really get to play with (excluding the pedals, which are fairly easily modeled).
Any other type of instrument has many more ways to get a note out, all of which create much different timbres. Since we're most familiar with guitars around here we'll use that as an example. Even just with a pick you can make some completely different sounds depending on the angle of attack, or even where along the string you pick. Or you can use your fingers, or... None of these things come across in MIDI very easily. For the most part (partly because it was originally intended to be controlled from a keyboard type interface) you get note number, velocity (the "force" of the initial key strike) and aftertouch (pressure applied or released after the initial strike).
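That "one sample per octave" compromise can be sketched in a few lines. The root notes here are invented for illustration (one sampled C per octave); the pitch-shift ratio follows from equal temperament, where each semitone multiplies frequency by 2^(1/12):

```python
# Multisample lookup: pick the nearest sampled root note and compute
# the playback-rate ratio needed to transpose it to the requested note.
ROOT_NOTES = [24, 36, 48, 60, 72, 84, 96]  # hypothetical: one C per octave

def pick_sample(note):
    """Nearest sampled root, plus the rate ratio to reach `note`."""
    root = min(ROOT_NOTES, key=lambda r: abs(r - note))
    return root, 2 ** ((note - root) / 12)

root, rate = pick_sample(64)  # E4 is nearest to the C4 (60) sample
print(root, round(rate, 3))   # played back ~1.26x faster (up a major third)
```

The farther `note` sits from its root, the bigger the ratio, and the more the "fake" timbre shift described above creeps in; sampling more often across the keyboard keeps the ratio close to 1.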
There are a plethora of Continuous Controller messages available which might be used to swap samples or otherwise mimic the different articulations, but we've really only got so many appendages with which to accomplish this. Most keyboard controllers have a couple "wheels" or joystick or something to control pitch bend and modulation (vibrato), but you have to take at least one hand off the keyboard to do it. On a guitar, aftertouch is difficult if not impossible to implement, but pitch bend is possible in "normal playing position". Beyond that, though... So, that's why I needed a Bible for the M3R. There are tricks within subtractive synthesis which can help to sort of fake some of the nuances that are missing from static samples. We use the limited information which we get from the controller to control any of a number of parameters beyond what sample to play and how loud. Things like amplitude envelopes so that playing softer softens the attack of the note relative to the sustain, or filter cutoff so that louder notes are brighter with more high order harmonics, or even pitch envelopes to simulate the "overshoot" that happens when certain instruments are played aggressively. There's a lot that can be done there with some careful and thoughtful programming, but it still only really goes so far, and is still a compromise.
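Those subtractive-synthesis tricks boil down to mapping the one expressive number you reliably get (velocity) onto parameters beyond loudness. A minimal sketch, with ranges invented for illustration rather than taken from any real synth:

```python
# Hypothetical velocity-to-parameter mapping: harder playing opens the
# filter (brighter) and shortens the amp-envelope attack (snappier).
def velocity_to_params(velocity):
    """Scale filter cutoff and amp attack from a MIDI velocity (1-127)."""
    v = velocity / 127
    return {
        "cutoff_hz": 500 + v * 7500,  # soft: ~560 Hz, hard: 8 kHz
        "attack_ms": 40 - v * 35,     # soft: slow swell, hard: fast attack
    }

print(velocity_to_params(20))
print(velocity_to_params(120))
```

Real modules typically expose these as modulation routings (velocity as a source, cutoff or envelope times as destinations), which is exactly the kind of thing a synth's manual, "Bible" or otherwise, spells out.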
|
|
|
Post by sumgai on Jan 21, 2012 18:41:34 GMT -5
ash, (Boy, you look like you've been taking lessons from 4real! ;D)

I'll address only a couple of points, as the same thing is happening as in the past between the two of us - we start out looking like we're gonna come to blows, and end up on the same page, nearly in the same paragraph. This is a good thing.

OK, let's deal with splits. On the keyboard, we can assign a split point such that two (or more) instruments sound out for each range. (And if it's a really spiffy 'board, perhaps there might be more than one split point.) But on a guitar, we have true polyphony - six sources, none of them altered or augmented in any way.

Now when I say "zone split", I mean that I can pluck one or more strings in the first zone, say over the bridge pickup, and assign that to represent Tone #1. I can assign the second zone, plucked over the neck pickup, to represent Tone #2. It's then just a matter of assigning those two Tones to yield whatever sounds I want out of the synth box. But of course, this is all MIDI - I'm not limited to using that 'zone' information for generating different tones. Perhaps I want Zone 1 to trigger a bass drum sound, while Zone 2 strobes a light, randomly chosen by a ring counter circuit I had previously set running.

When I say "fret split" I mean that as I hold my fretting hand over a certain region of the neck, I can achieve the same results as before (providing I actually fret one or more strings, that is). And once again, I have no fewer than six sources to play with, all of them freely and interchangeably assignable.

Now add velocity to that mix, the ability to pluck softly or stridently, and I've got a myriad of possible controls at my disposal. And at no time was I required to use any of that information to generate a note from a synth - although it'd be kinda dumb not to, I agree. In the end, you're correct, it's all in how the data is interpreted/assigned/programmed/etc.
and that's the responsibility of the synth module, to be sure. (Or any other non-musical controller - for instance, I can assign a certain note to start my Jam Station, another to stop playback, and yet another to exit the loop thereon. But I digress...)

But now we're down to another bowl of Cheerios..... When we want nuances related to the guitar, such as holding the pick sideways, or fingerpicking, or muffling the note(s), that's not something that MIDI understands, true enough. But does it have to? I mean, the idea, in 99% of all cases, is to trigger a note from another instrument, no? Then why would one try to impose the abilities of a guitar onto the other instrument? I certainly don't, at least not knowingly. Sure, some latitude is good, and some experimenting is really good, but in the end, if I want to "impress" the audience with a sax part, I sure as hell ain't gonna muffle the strings to make it do something different, any more than I might bend a note only a very tiny bit. (Ever try to make a sax bend a note - tain't easy!)

And in closing, I must say that everything starts out as a compromise. In the beginning, there's not much "me" in the music, and as time goes on, I add more "me". If I get to feeling frisky and start adding too much "me", one of two things is gonna happen - either I'm the next Van Halen, or I'm no longer sought by other musicians, nor employable in most senses of the word. Only a rare few get to throw compromise out the window. MIDI users are not usually found amongst them.

HTH

sumgai
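(For anyone who wants the zone-split idea in a nutshell: it could be caricatured like this. Purely a toy sketch of the concept - the zone numbers and actions are made up for illustration, not how an Axon-style unit actually does it internally.)

```python
def route_event(zone, string, note, velocity):
    """Decide what a plucked note triggers, based on which pluck zone it came from."""
    if zone == 1:                                # over the bridge pickup
        return ("tone", 1, note, velocity)       # play Tone #1
    elif zone == 2:                              # over the neck pickup
        return ("light", note % 8, velocity)     # strobe a light instead of a tone
    return ("ignore",)                           # unassigned zone: do nothing
```

The point being that once the hardware has tagged the event with a zone, the MIDI data itself is just data - a tone, a drum hit, or a light strobe are all equally valid assignments.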
|
|
|
Post by ashcatlt on Jan 21, 2012 21:44:50 GMT -5
A sax has different articulations and things that you might want to emulate as well - tonguing, overblowing, even biting the reed will all produce different effects, and it's not always easy to get that kind of thing via MIDI in live performance. I guess that was the point that I was trying to make, and I think the sort of thing that 4real has been talking about as a limitation.
|
|
|
Post by ashcatlt on Jan 21, 2012 22:28:45 GMT -5
So I went back to page one and it turns out I didn't actually talk about how these things fail.
There was a time not too long ago when I could not live with less than 6 MIDI ports. I had a 7 foot tall rack full of gear, half of which was MIDI modules. Then there were several actual keyboards in the studio. I didn't necessarily have to use all of these at the same time (sometimes I did!), but I needed all the ports so that everything would be available when I wanted it without having to crawl behind the rack with a flashlight in my mouth.
In the last few years, I've been working toward replacing that entire rack and a 4 foot wide mixing board with one computer and maybe 8 rack spaces worth of gear, and I'm pretty much there now.
But there are a few things which I do with hardware which can't be accomplished in the computer just yet, and they involve audio>midi conversion.
The first is the guitar>midi thing. GK-3 plus GI10 accomplish this.
Number two is my "Mr. Rogers" trick. If you've read the thread about the LT live rig, then you've heard about this. The GI10 has a 1/4 inch TS jack on the front panel labeled as "Mic In". Of course, you can plug whatever you want in there, but since it's only got the one set of conductors, it can only create a monophonic output. You can plug in a microphone and sing into it to generate MIDI notes! Or plug in a bass or something.
But if you feed it a polyphonic signal it gets a bit weird. It basically picks a note at random from the mess of harmonic input. I put the microphone somewhere around the stage area or even out in the room to pick up the entire mix, crowd noise, whatever. It actually does a pretty good job of playing semi-random accompaniment to our sound. It's a lot of fun in between songs when it plays along to my stage banter and the cheers and (more often) jeers from the crowd. Depending on settings, it'll even play along to the sounds of footsteps as people walk around. Thus the Mr. Rogers tag. It's a lot like the weird piano thing that follows him around his house.
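Real converters like the GI10 are far more sophisticated than this, but the core of the monophonic job - estimate a pitch, emit the nearest note number - can be sketched with a crude zero-crossing counter. This is my own toy, only good for a clean single-pitch input; feed it the mess from a room mic and you'd get exactly the kind of semi-random note picking described above:

```python
import math

def freq_to_midi(freq):
    """Nearest MIDI note number for a frequency in Hz (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq / 440.0))

def detect_note(samples, sample_rate):
    """Estimate the pitch of a clean monophonic signal by counting zero crossings."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    duration = len(samples) / sample_rate
    freq = crossings / (2.0 * duration)  # a sine crosses zero twice per cycle
    return freq_to_midi(freq)

# One second of a 440 Hz sine should land on MIDI note 69 (A4).
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
```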
That's all fine and good on stage, but if I want to emulate the effect on an "in-the-box" mix, I have to pump the audio out, through GI10, and then bring the MIDI back into the computer. Much better would be a VST plugin which I could strap across a bus in Reaper and route off to VSTis as necessary. Haven't found that in my price range (free or nearly so) yet.
The other thing would be drum triggers. I've got a pretty good set of Roland drum pads and an Alesis D4. The D4 has 10 inputs on the back which take audio input. In this case each input corresponds to exactly one note number. It expects to hear a very quick percussive input and when it does it measures the volume, assigns a corresponding velocity level, and spits out a quick NoteOn-NoteOff routine. I don't play the things, just keep them around in case I can convince somebody else to hit them, and that's what I do when I want "live" drums on stage. In the studio, I'll either use the drum pads on my Axiom keyboard, or (much more likely now) the GK-3.
But again, I sometimes do horrible things with the D4. As I said, the trigger inputs expect a percussive input. What happens when it hears a sustained input? Well, it works by monitoring the input and generating a note any time the signal goes above some threshold level (adjustable in the D4's software). When it gets a sustained input, which stays above that threshold level, it just keeps spitting out note events in rapid fire fashion. How often it re-triggers is somewhat limited by the internal mechanism, and adjustable by software parameters. And with careful tweaking you can get it to track dynamics fairly well!
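That retrigger behavior is easy to model. A crude sketch of my own (not Alesis code - the threshold and hold-off numbers are invented) of a trigger input that fires once per threshold crossing, then ignores the input for an inhibit window, so a sustained signal re-fires every time the window expires:

```python
def trigger_notes(samples, threshold=0.5, holdoff=100):
    """Return the sample indices where a NoteOn would be generated."""
    events, wait = [], 0
    for i, s in enumerate(samples):
        if wait > 0:
            wait -= 1            # still inside the retrigger-inhibit window
        elif abs(s) >= threshold:
            events.append(i)     # threshold crossed: fire a note
            wait = holdoff       # and arm the inhibit window
    return events
```

A percussive hit produces a single event; a held tone produces the rapid-fire stream of notes, with the hold-off length setting the machine-gun rate.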
By picking the right sound to trigger and maybe applying some effects you can get all kinds of crazy crap out of the thing. I like to use bass drums or low toms through some distortion to make it sound like there's a motorcycle or muscle car revving along with my guitar. Or use cymbals through a bunch of reverb and some more distortion to get a crazy metallic sheen around all my notes. There are other applications. Just occurred to me that I might implement the MoogerFooger Ring Modulator to use the drums to modulate the guitar...
Anyway, haven't found a good free or cheap VST option for this either. I know of Drumagog, but not sure I want to pay for it. OTOH, I think I've read that this type of thing is built into Reaper nowadays. Hmmm...
Edit -
Stop the presses! Reaper comes free with plugins that do both of these things. ReaTune does monophonic audio>midi on top of its "autotune" applications, and the included set of Stillwell plugs has one named "drumtrigger" which does just that. So, I'm gonna go play around with those for a while.
|
|
|
Post by sumgai on Jan 22, 2012 0:05:59 GMT -5
ash, In hardware? Oh, I do all that with a Warp Factory, by Electrix. Made to order, but sadly out of production. Rack-mountable, and MIDI-cognizant, too. Take your time playing with your "new" toys... we'll leave the light on for ya. sumgai
|
|
|
Post by 4real on Jan 22, 2012 21:10:15 GMT -5
A sax has different articulations and things that you might want to emulate as well - tonguing, overblowing, even biting the reed will all produce different effects, and it's not always easy to get that kind of thing via MIDI in live performance. I guess that was the point that I was trying to make, and I think the sort of thing that 4real has been talking about as a limitation. Absolutely...it's all a matter of what you are trying to do. I am all for MIDI and use it a lot in recording; it can even be used as a 'feature', and I think often to good effect...not everything need be 'real'. In fact, if not for cost at this time, I suspect I'd be pursuing it more...it offers so much in terms of things like transcription, and in terms of the solo guitar thing, I am sure it would produce a better result to have a sampled string bass following the guitar than making do with 'pitch shifting'. It all depends on context and what one is trying to achieve, perhaps. More and more, though, I think the nuances are very important. It might be right that 99% won't notice, but who knows...I suspect not in a recording where there are repeated listenings. ... The looping thing is interesting. The repeating thing is interesting in itself, and I've heard some great players. I'm not sure that kind of 'repeated' pattern thing is for me, but I saw this solo guitar singer a while back locally who had a fantastic, subtle looping technique. I must go see him again to check out his way of doing it. He seemed to be sampling, say, the verse before, and seamlessly making room for a solo or something. Having vocals, and therefore another instrument, certainly helps, but in what I am hoping to achieve, the instrument carries the whole thing. Even if I were to use MIDI to change tones, the nuances are important. Not that many can't be had with MIDI at all, but there are limitations in some part. ... Still, others might want different...in my looking into the GR-55 I found this, for instance...
I don't know...if one were to play a whole show like this, it might be seen as a 'novelty' more than anything...it is kind of the impression I got from a 'lay audience'...clever, but...hmmm.... ... Reaper has so many features and weird little things in it, and a lot of people working its open nature to their own quirky advantage...it is why I chose to use it, and the MIDI converter from this thread works really well on it. One thing with the audio-to-MIDI thing is that rather than worry too much about the tracking in performance, one can of course apply it to anything post-recording, clean things up to nice clean audio, and get some great sounds out of it. In the context of other instruments and things going on, and without the visual cue, who is going to know what the controller was, or care!
|
|