gray_wolf wrote:
Something to be aware of...
http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco
On 2016-11-17, Joerg Niggemeyer <joerg.niggemeyer@nucon.de> wrote:
In message <gq2q2cdt1tjabhvotorscgucq7tpkh8lbt@4ax.com>,
krw <krw@somewhere.com> wrote:
In the Arizona sun, the case won't get hotter than 80C? I think
you're wrong. There is a reason that most automotive electronics is
specified to 150C.
Sorry for the misunderstanding, my English is not the best ;-)
I meant it as a comparison. The requirement for automotive LEDs is
operation at up to 125°C ambient or cooling-block temperature.
For indoor applications, doing retrofits under a wooden ceiling,
I would try to keep the outer lamp housing temperature << 100°C.
Why? Wood is safe to over 200°C, and incandescent lamps are typically
hotter than that.
On Thursday, November 17, 2016 at 9:00:59 PM UTC-5, Jasen Betts wrote:
[snip]
Why? Wood is safe to over 200°C, and incandescent lamps are typically
hotter than that.
To protect the LED?
Cheers,
James Arthur, member, ASPCE
(American Society for the Prevention of Cruelty to Electronics!)
Something to be aware of...
http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco
Old news. I allow users to "design" the voice used in one of my
speech synthesizers by feeding it some prerecorded speech. If
the speech sample covers enough of the diphone inventory, I extract
the appropriate diphones *from* the speech sample and, later,
use them to piece together the words that I "need" to speak.
To be clear: I don't need a recording of every word that I may ever need
to utter. Rather, a recording from which I can extract every "sound-pair"
(diphone) that MIGHT occur in a (new!) word.
In a crude sense, a new word is pieced together by gluing those
sound samples together and then smoothing the transitions at
the seams, algorithmically. So, to utter the word, "Hello",
you find the diphones from your inventory that cover
" H" (i.e., end of a period of silence followed by the start of an H sound)
"HE" (end of an H sound followed by the start of an E sound)
"EL"
"LL"
"LO"
"O " (end of an O sound followed by the start of a period of silence)
Cutting the sounds up in this way (instead of at the beginning/end of
individual sounds -- like "H", "E", "L", "O") ensures the splice points
are more similar to each other (because the middle of a particular sound
lines up better with the middle of that same sound, followed by the start
of a NEW sound).
Key to this working is having a large enough "unit inventory" to choose
from. I.e., if you don't have a sample of a word beginning with an
'H' sound, then the " H" diphone isn't available for you to use to
piece together into new words -- even though "HE" *might* be present!
Also, you need a sample set that is roughly spoken in the same "manner"
so you're not trying to piece together sounds having varying degrees of
inflection (e.g., consider the manner in which "that" is uttered in
"How did you do /that/?" and "/That/ is the one that I wanted!")
[I've been gathering sound clips from particular movies/TV shows so
I can synthesize certain classic "voices". This is a lot harder as I don't
have control over what is being said in those samples -- nor HOW it is
being said!]
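To make the "gluing" concrete, here's a minimal sketch (Python with NumPy)
of diphone concatenation as described above. The inventory layout, the
diphone naming, and the 10 ms linear crossfade are illustrative assumptions
on my part; a real synthesizer does much more at each join (pitch and
duration matching, spectral smoothing):

import numpy as np

RATE = 16000                # sample rate, Hz (assumed)
XFADE = int(0.010 * RATE)   # 10 ms crossfade at each seam (assumed)

def diphones_for(phones):
    # " HELLO " -> [' H', 'HE', 'EL', 'LL', 'LO', 'O ']
    return [phones[i:i + 2] for i in range(len(phones) - 1)]

def crossfade(a, b, n=XFADE):
    # Fade a out and b in over n samples, so the splice falls
    # mid-sound -- where the two units are most alike.
    fade = np.linspace(0.0, 1.0, n)
    seam = a[-n:] * (1.0 - fade) + b[:n] * fade
    return np.concatenate([a[:-n], seam, b[n:]])

def synthesize(phones, inventory):
    # inventory: diphone name -> numpy array of samples, extracted
    # beforehand from the prerecorded speech. Units are assumed to
    # be longer than XFADE samples.
    units = []
    for d in diphones_for(phones):
        if d not in inventory:
            raise KeyError("coverage gap: no unit for %r" % d)
        units.append(inventory[d].astype(float))
    out = units[0]
    for u in units[1:]:
        out = crossfade(out, u)
    return out

Building the inventory itself -- segmenting the user's recording at the
middles of sounds -- is assumed to have happened offline.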
Don Y wrote:
[snip]
'Cool Edit' has been around since at least 2002, from before Adobe bought it.
On 11/20/2016 5:27 AM, Michael A. Terrell wrote:
'Cool Edit' has been around since at least 2002, from before Adobe bought it.
This isn't splicing sounds (spoken words) into an audio stream but,
rather, synthesizing words in a particular (person's) voice -- without
having any sample of that word spoken by that person.
E.g., to hear *your* name spoken AS IF by that person, despite the
fact that they may never have pronounced it previously:
"Hello, Mr. Terrell."
In my application, for folks who've lost the ability to speak -- or who
know they are about to (think: ALS, throat cancer, etc.).
It appears Adobe is trying to automate much of the tedious work
(that *I* would currently have to do manually).
Don Y wrote:
[snip]
This isn't splicing sounds (spoken words) into an audio stream but,
rather, synthesizing words in a particular (person's) voice -- without
having any sample of that word spoken by that person.
[snip]
Cool Edit was capable of a lot more than simple cut & paste editing.
Something to be aware of...
http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco
Yawn.
On 11/20/2016 7:43 AM, Michael A. Terrell wrote:
[snip]
Cool Edit was capable of a lot more than simple cut & paste editing.
I have Audition (CoolEdit's new name). I've never encountered a place
where I can type in "Hello, Mr Terrell" and expect to HEAR it speak that
in the voice of a person that *I* select (from a CHARACTERIZATION of
that voice).
It will let me synchronize a *recording* that I might have of that person
*saying* "Hello, Mr Terrell" to another person's similar utterance.
But, it won't create the words out of thin air (e.g., after that person
is unable to speak to create new recordings).
Michael A. Terrell wrote:
A lot of radio stations used it in production studios to create whatever
they wanted. They used it to build individual sounds, then used those to
make it talk or even sing.
Stations mostly used it to remove noise from old records, or the hiss from
old audio tape recordings.
There was a program for the Commodore 64, back in the early '80s, called
SAM ('Software Automatic Mouth') that allowed you to type in words for it
to speak. You could adjust the pitch and the tempo to make it sound fairly
good. When you consider that it was running on a 6502 and the MOS
Technology 6581 sound chip, it was interesting.
Other versions, like those for the Apple II or Atari, needed extra hardware.
On 11/21/2016 12:45 AM, Michael A. Terrell wrote:
[Cool Edit]
[snip]
The mid '70s through mid '80s were, perhaps, the heyday of speech
synthesis.
Processors were becoming cheap enough that the idea of an *appliance* that
could perform the function was a real possibility.
Gagnon created his Votrax (embodied in a set of PCBs, initially)
which led to more integrated (though constrained) variants like the
Artic chip. National had their Digitalker series. TI was pushing
LPC (and actually probably sold more synthesizers than anyone else
given the Speak 'n' Spell's success).
At the same time, pure software implementations became feasible
(KlattTalk/MITalk/DECtalk) -- with more robust TTS rulesets.
All were reasonably *low* resource solutions -- as resources were
still costly!
But, shortly thereafter, the idea of using more capable processors
(more complex synthesis algorithms) and bigger memories (e.g., unit
inventories, pronunciation dictionaries, etc.) seemed to push the
"leaner" solutions off to the side. The idea of storing the
pronunciations (grapheme to phoneme conversions) of hundreds of
thousands of words was met with a <shrug>: "Memory is cheap".
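For a sense of what a dictionary-backed grapheme-to-phoneme step looks
like, a toy sketch in Python. The entries and the per-letter fallback are
made up for illustration; real systems like MITalk/DECtalk paired large
dictionaries with hundreds of context-sensitive letter-to-sound rules:

LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "wanted": ["W", "AA", "N", "T", "IH", "D"],
}

FALLBACK = {"a": "AE", "e": "EH", "h": "HH", "l": "L", "o": "OW", "t": "T"}

def to_phonemes(word):
    # Dictionary first; out-of-vocabulary words fall through to naive
    # per-letter rules (a real ruleset is context-sensitive).
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    return [FALLBACK.get(ch, ch.upper()) for ch in word if ch.isalpha()]

print(to_phonemes("hello"))  # dictionary hit: ['HH', 'AH', 'L', 'OW']
print(to_phonemes("hole"))   # fallback path:  ['HH', 'OW', 'L', 'EH']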
The appeal of diphone synthesis is that it allows you to more
realistically model the speech of a particular human being. So,
you can make a synthesizer that "talks like" that person. In
theory, you can also tweek the control parameters of (e.g.) a
formant-based synthesizer -- but, it's like tuning a piano
vs. a trombone (many individual adjustments vs. *one*).
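To see why that's "many individual adjustments": each frame of formant
synthesis is driven by dozens of parameters. The subset below loosely
follows Klatt-style parameter names, with made-up defaults, just to show
the scale of the knob count:

from dataclasses import dataclass

@dataclass
class FormantFrame:
    f0: float = 100.0   # fundamental frequency, Hz (pitch)
    av: float = 60.0    # amplitude of voicing, dB
    f1: float = 500.0   # first formant frequency, Hz
    b1: float = 60.0    # first formant bandwidth, Hz
    f2: float = 1500.0  # second formant frequency, Hz
    b2: float = 90.0    # second formant bandwidth, Hz
    f3: float = 2500.0  # third formant frequency, Hz
    b3: float = 150.0   # third formant bandwidth, Hz
    ah: float = 0.0     # aspiration amplitude, dB
    af: float = 0.0     # frication amplitude, dB

Matching a specific speaker means re-estimating most of these, per phoneme,
per transition -- versus a diphone inventory, where the speaker's own
recordings carry that detail implicitly.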
[FWIW, there are characterizations of several "real people"
presented in Klatt's research data -- himself (and daughter!)
included in that set. No doubt the availability of his own
"speech organs" helped, providing data he could analyze on the spot.]
With a synthesizer that can talk with the voice of another human,
there are lots of interesting possibilities! E.g., bolt a
speech *recognizer* on the front-end so a user can speak in his
own voice and have it *heard* as the voice of another individual!
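A sketch of that pipeline, with both engines as hypothetical placeholders
(neither function names a real API):

def recognize(audio_in):
    # Hypothetical ASR stage: waveform in the user's own voice -> text.
    raise NotImplementedError("plug in a speech recognizer here")

def synthesize_as(text, voice_model):
    # Hypothetical TTS stage: text -> waveform in the target voice,
    # e.g. driven by a diphone inventory characterizing that voice.
    raise NotImplementedError("plug in the voice-model synthesizer here")

def relay(audio_in, voice_model):
    # Speak in your own voice; be *heard* in someone else's.
    return synthesize_as(recognize(audio_in), voice_model)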
[snip] ...bolt a speech *recognizer* on the front-end so a user can
speak in his own voice and have it *heard* as the voice of another
individual!
Worked for Blofeld.
The vacuum tubes are 26HU5 and 36LW6.
Both vacuum tubes have an internal connection between the cathode &
screen.
I noticed laser engravers on eBay, and the heads are sold separately.
I would like to use these tubes in grounded-grid configuration. In a tube
operated this way, the screen is grounded and the cathode is driven.
That cannot be done with these tubes because of the internal
cathode-to-screen connection, which is visible from outside the tube.
The internal connecting wire is very thin; maybe a low-power laser
could cut it.
Lasers are used to engrave metal, so it seems one of these laser heads
might do it.
Any suggestions or help appreciated.
Best John
On 12/26/2016 09:14 PM, John Cohen wrote:
[snip]
I cross-posted this to sci.electronics.design because there are some old
time tube guys there.
Doing that with a laser will be difficult and dangerous for an amateur,
and then you still have the problem of how to ground the screen, which
would then be floating with no external connection, right?
Cheers
Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics
160 North State Road #203
Briarcliff Manor NY 10510
hobbs at electrooptical dot net
http://electrooptical.net