Driver to drive?

Something to be aware of...

http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco
 
[groups elided]

On 11/18/2016 1:36 AM, gray_wolf wrote:
Something to be aware of...

http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco

Old news. I allow users to "design" the voice used in one of my
speech synthesizers by feeding it some prerecorded speech. If
the speech sample covers enough of the diphone inventory, I extract
the appropriate diphones *from* the speech sample and, later,
use them to piece together the words that I "need" to speak.

To be clear: I don't need a recording of every word that I may ever need
to utter. Rather, a recording from which I can extract every "sound-pair"
(diphone) that MIGHT occur in a (new!) word.

In a crude sense, a new word is pieced together by gluing those
sound samples together and then smoothing the transitions at
the seams, algorithmically. So, to utter the word, "Hello",
you find the diphones from your inventory that cover
" H" (i.e., end of a period of silence followed by the start of an H sound)
"HE" (end of an H sound followed by the start of an E sound)
"EL"
"LL"
"LO"
"O " (end of an O sound followed by the start of a period of silence)

Cutting the sounds up in this way (instead of at the beginning/end of
individual sounds -- like "H", "E", "L", "O") ensures the splice points
are more similar to each other (because the middle of a particular sound
lines up better with the middle of that same sound, followed by the start
of a NEW sound)

Key to this working is having a large enough "unit inventory" to choose
from. I.e., if you don't have a sample of a word beginning with an
'H' sound, then the " H" diphone isn't available for you to use to
piece together into new words -- even though "HE" *might* be present!
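
A rough sketch of the bookkeeping, in Python (the names, the crossfade and
the sample format are illustrative only -- not my actual implementation):

# Each diphone is assumed pre-cut from the donor speech and stored as a
# list of float samples, keyed by its (left_sound, right_sound) pair.

def word_to_diphones(sounds):
    """['H','E','L','L','O'] -> [(' ','H'), ('H','E'), ..., ('O',' ')]"""
    padded = [' '] + sounds + [' ']            # silence at both ends
    return list(zip(padded, padded[1:]))

def synthesize(sounds, inventory, overlap=64):
    out = []
    for pair in word_to_diphones(sounds):
        if pair not in inventory:              # the coverage problem
            raise KeyError("no sample for diphone %r" % (pair,))
        unit = inventory[pair]
        if not out:
            out.extend(unit)
            continue
        n = min(overlap, len(unit), len(out))  # crude crossfade at the seam
        for i in range(n):
            w = i / float(n)
            out[-n + i] = out[-n + i] * (1.0 - w) + unit[i] * w
        out.extend(unit[n:])
    return out

Real systems pick units by context and smooth the joins in cleverer ways
(e.g., PSOLA), but the "glue diphones, blend the seams" bookkeeping is
essentially this.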

Also, you need a sample set that is roughly spoken in the same "manner"
so you're not trying to piece together sounds having varying degrees of
inflection (e.g., consider the manner in which "that" is uttered in
"How did you do /that/?" and "/That/ is the one that I wanted!")

[I've been gathering sound clips from particular movies/TV shows so
I can synthesize certain classic "voices". This is a lot harder as I don't
have control over what is being said in those samples -- nor HOW it is being
said!]
 
On Thursday, November 17, 2016 at 9:00:59 PM UTC-5, Jasen Betts wrote:
On 2016-11-17, Joerg Niggemeyer <joerg.niggemeyer@nucon.de> wrote:
In message <gq2q2cdt1tjabhvotorscgucq7tpkh8lbt@4ax.com
krw <krw@somewhere.com> wrote:


In the Arizona sun, the case won't get hotter than 80C? I think
you're wrong. There is a reason that most automotive electronics is
specified to 150C.

Sorry for the misunderstanding, my English is not the best ;-)
I meant it as a comparison. The requirement for automotive LEDs is
operation up to 125°C ambient or cooling-block temperature.

For indoor home applications, doing some retrofits under a wooden ceiling,
I would try to keep the outer lamp housing temperature << 100°C.

Why? Wood is safe to over 200°C, and incandescent lamps are typically
hotter than that.

To protect the LED?

Cheers,
James Arthur, member, ASPCE
(American Society for the Prevention of Cruelty to Electronics ;)
 
On Friday, November 18, 2016 at 12:42:53 PM UTC-5, dagmarg...@yahoo.com wrote:
On Thursday, November 17, 2016 at 9:00:59 PM UTC-5, Jasen Betts wrote:
On 2016-11-17, Joerg Niggemeyer <joerg.niggemeyer@nucon.de> wrote:
[quoted text elided]

Why? Wood is safe to over 200°C, and incandescent lamps are typically
hotter than that.

To protect the LED?

Cheers,
James Arthur, member, ASPCE
(American Society for the Prevention of Cruelty to Electronics ;)

Oh boy, I might be in trouble. I was running this
heater circuit with a 50 ohm resistor in a TO-220..
Got so hot the tacked on wire desoldered itself twice.
Kind of a built in fuse...
Third time the resistor failed open... ~2kohm

There was no smoke though, maybe only a misdemeanor. :^)

George H.
 
On 11/18/2016 2:36 AM, gray_wolf wrote:
Something to be aware of...

http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco

Too late..Johnathon Ball has been doing that for years.

--
It is hardly too strong to say that the Constitution was made to guard
the people against the dangers of good intentions. There are men in all
ages who mean to govern well, but *They mean to govern*. They promise to
be good masters, *but they mean to be masters*. Daniel Webster
 
Don Y wrote:
[quoted text elided]

'Cool Edit' has been around since at least 2002, from before Adobe
bought it.


--
Never piss off an Engineer!

They don't get mad.

They don't get even.

They go for over unity! ;-)
 
On 11/20/2016 5:27 AM, Michael A. Terrell wrote:
Don Y wrote:

[quoted text elided]

'Cool Edit' has been around since at least 2002, from before Adobe bought it.

This isn't splicing sounds (spoken words) into an audio stream but,
rather, synthesizing words in a particular (person's) voice -- without
having any sample of that word spoken by that person.

E.g., to hear *your* name spoken AS IF by that person, despite the
fact that they may never have pronounced it, previously:
"Hello, Mr. Terrell."

In my application, for folks who've lost the ability to speak -- or who
know they are about to (think: ALS, throat cancer, etc.).

It appears Adobe is trying to automate much of the tedious work
(that *I* would currently have to do manually)
 
Don Y wrote:
[quoted text elided]

'Cool Edit' has been around since at least 2002, from before Adobe
bought it.

This isn't splicing sounds (spoken words) into an audio stream but,
rather, synthesizing words in a particular (person's) voice -- without
having any sample of that word spoken by that person.

E.g., to hear *your* name spoken AS IF by that person, despite the
fact that they may never have pronounced it, previously:
"Hello, Mr. Terrell."

In my application, for folks who've lost the ability to speak -- or who
know they are about to (think: ALS, throat cancer, etc.).

It appears Adobe is trying to automate much of the tedious work
(that *I* would currently have to do manually)

Cool Edit was capable of a lot more than simple cut & paste editing.

--
Never piss off an Engineer!

They don't get mad.

They don't get even.

They go for over unity! ;-)
 
On 11/20/2016 7:43 AM, Michael A. Terrell wrote:
[quoted text elided]

Cool Edit was capable of a lot more than simple cut & paste editing.

I have Audition (CoolEdit's new name). I've never encountered a place
where I can type in "Hello, Mr Terrell" and expect to HEAR it speak that
in the voice of a person that *I* select (from a CHARACTERIZATION of
that voice).

It will let me synchronize a *recording* that I might have of that person
*saying* "Hello, Mr Terrell" to another person's similar utterance.
But, it won't create the words out of thin air (e.g., after that person
is unable to speak to create new recordings)
 
Don Y wrote:
[quoted text elided]

Cool Edit was capable of a lot more than simple cut & paste editing.

I have Audition (CoolEdit's new name). I've never encountered a place
where I can type in "Hello, Mr Terrell" and expect to HEAR it speak that
in the voice of a person that *I* select (from a CHARACTERIZATION of
that voice).

It will let me synchronize a *recording* that I might have of that person
*saying* "Hello, Mr Terrell" to another person's similar utterance.
But, it won't create the words out of thin air (e.g., after that person
is unable to speak to create new recordings)

A lot of radio stations used it in production studios to create
whatever they wanted. They used it to build individual sounds, then used
those to make it talk or even sing.

Stations mostly used it to remove noise from old records, or the
hiss from old audio tape recordings.


There was a program for the Commodore 64, back in the early '80s
called SAM, 'Software Automatic Mouth' that allowed you to type in words
for it to speak. You could adjust the pitch, and the tempo to make it
sound fairly good. When you consider that it was running on a 6502, and
the MOS Technology 6581 sound chip, it was interesting.

Other versions needed extra hardware, like the Apple II or Atari.


https://en.wikipedia.org/wiki/Software_Automatic_Mouth

--
Never piss off an Engineer!

They don't get mad.

They don't get even.

They go for over unity! ;-)
 
On 11/21/2016 12:45 AM, Michael A. Terrell wrote:

[Cool Edit]

[quoted text elided]

There was a program for the Commodore 64, back in the early '80s called SAM,
'Software Automatic Mouth' that allowed you to type in words for it to speak.
You could adjust the pitch, and the tempo to make it sound fairly good. When
you consider that it was running on a 6502, and the MOS Technology 6581 sound
chip, it was interesting.

Other versions needed extra hardware, like the Apple II or Atari.

The mid '70s through mid '80s were, perhaps, the heyday of speech synthesis.
Processors were becoming cheap enough that the idea of an *appliance* that
could perform the function was a real possibility.

Gagnon created his Votrax (embodied in a set of PCB's, initially)
which led to more integrated (though constrained) variants like the
Artic chip. National had their Digitalker series. TI was pushing
LPC (and actually probably sold more synthesizers than anyone else
given the Speak 'n' Spell's success).

At the same time, pure software implementations became feasible
(KlattTalk/MITalk/DECtalk) -- with more robust TTS rulesets.

All were reasonably *low* resource solutions -- as resources were
still costly!

But, shortly thereafter, the idea of using more capable processors
(more complex synthesis algorithms) and bigger memories (e.g., unit
inventories, pronunciation dictionaries, etc.) seemed to push the
"leaner" solutions off to the side. The idea of storing the
pronunciations (grapheme to phoneme conversions) of hundreds of
thousands of words was met with a <shrug>: "Memory is cheap".
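
The "front end" then becomes little more than a big lookup table with a
rule-based fallback for words it doesn't know -- something like this toy
Python sketch (the lexicon entries and letter rules are illustrative only):

# Toy grapheme-to-phoneme front end: exception dictionary first,
# crude letter-to-sound rules as the fallback.

LEXICON = {                       # hand-entered pronunciations
    "hello": ["HH", "AH", "L", "OW"],
    "colonel": ["K", "ER", "N", "AH", "L"],
}

LETTER_RULES = {                  # hopelessly naive fallback
    "a": ["AE"], "b": ["B"], "e": ["EH"], "h": ["HH"],
    "l": ["L"],  "o": ["OW"], "r": ["R"],
}

def to_phonemes(word):
    word = word.lower()
    if word in LEXICON:           # the "memory is cheap" path
        return LEXICON[word]
    phones = []                   # rule-based guess for novel words
    for ch in word:
        phones.extend(LETTER_RULES.get(ch, []))
    return phones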

The appeal of diphone synthesis is that it allows you to more
realistically model the speech of a particular human being. So,
you can make a synthesizer that "talks like" that person. In
theory, you can also tweak the control parameters of (e.g.) a
formant-based synthesizer -- but, it's like tuning a piano
vs. a trombone (many individual adjustments vs. *one*).

[FWIW, there are characterizations of several "real people"
presented in Klatt's research data -- himself (and daughter!)
included in that set. No doubt because his own "speech organs"
were available to provide data that he could analyze on-the-spot.]

With a synthesizer that can talk with the voice of another human,
there are lots of interesting possibilities! E.g., bolt a
speech *recognizer* on the front-end so a user can speak in his
own voice and have it *heard* as the voice of another individual!
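
The plumbing for that is conceptually trivial -- the hard part is the
quality of each stage. A sketch (every named piece here is a placeholder
for whatever recognizer/synthesizer you actually have):

def revoice(audio_in, recognize, to_phonemes, synthesize, target_inventory):
    text = recognize(audio_in)              # speech -> text (ASR)
    phones = []
    for word in text.split():
        phones.extend(to_phonemes(word))    # text -> sound sequence
        phones.append(" ")                  # word boundary = short silence
    # sounds -> diphones -> waveform, using the TARGET speaker's units
    return synthesize(phones, target_inventory)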

[Of course, this assumes the same TTS rules apply to each individual.
It wouldn't account for speech mannerisms and affectations associated
with the target speaker. So, it wouldn't fool someone intimately familiar
with that person's voice.]

It will be interesting to see how far Adobe can come towards making
this a "mindless" activity -- without requiring lots of user expertise to
manually label the sample dataset, etc.
 
On 11/21/2016 05:52 PM, Don Y wrote:
[quoted text elided]

With a synthesizer that can talk with the voice of another human,
there are lots of interesting possibilities! E.g., bolt a
speech *recognizer* on the front-end so a user can speak in his
own voice and have it *heard* as the voice of another individual!

Worked for Blofeld.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 11/22/2016 8:41 AM, Phil Hobbs wrote:
With a synthesizer that can talk with the voice of another human,
there are lots of interesting possibilities! E.g., bolt a
speech *recognizer* on the front-end so a user can speak in his
own voice and have it *heard* as the voice of another individual!

Worked for Blofeld.

Amusingly, the actor who played ESB (in Diamonds) had previously played
a British operative in _You Only Live Twice_. I guess the producers figured
viewers had short memories...

[Also amusing to consider Woody Allen's role as "Jimmy Bond" in the *first*
remake of Casino Royale (following the Peter-Lorre-as-Le-Chiffre version)]
 
On 11/20/2016 6:27 AM, Michael A. Terrell wrote:

'Cool Edit' has been around since at least 2002, from before Adobe bought it.

IIRC I had a copy of the shareware version in 1996.
I wish they had kept the Scientific Filters in the later versions.
 
On 12/26/2016 09:14 PM, John Cohen wrote:
The vacuum tubes are 26HU5 and 36LW6.

Both vacuum tubes have an internal connection between the cathode &
screen.

I noticed laser engravers on eBay, and the heads are sold separately.

I would like to use these tubes in grounded-grid configuration. With
vacuum tubes operated in this manner, the screen is grounded and the
cathode is excited.

I cannot do that with these tubes because of the internal connection,
which is visible from the outside of the tube. The internal connecting
wire is very thin. Maybe a low-power laser might just cut it.

Lasers are used to engrave metal. It seems one of these laser heads
might do it.

Any suggestions/help appreciated.

Best, John

I cross-posted this to sci.electronics.design because there are some old
time tube guys there.

Doing that with a laser will be difficult and dangerous for an amateur,
and then you still have the problem of how to ground the screen, which
would then be floating with no external connection, right?

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 27.12.16 16:53, Phil Hobbs wrote:
[quoted text elided]

The grid connected to the cathode is not the screen grid, but the
suppressor. It is not a good idea to cut it off, as the tube may
then self-destruct from secondary emission from the screen grid.

Your tube is *not* suited for grounded-grid operation. There are
tetrodes which are used in grounded-grid configuration, with either
the normal voltage on the screen, or the screen and control grid tied
together.

--

-TV
 
1. That's the suppressor (as Tauno noted) or, even more accurately, the
beam-forming plates.

The transconductance to the suppressor is approximately zero. It's only of
significance when it comes to capacitance. (Which, obviously, is quite
important for GG operation at radio frequencies!)

2. I doubt you're going to get /anything/ of advantage out of either of
those tubes, by going GG. Even if the suppressor feedback weren't a burden.

That's the whole point of having a tetrode (or beam tetrode, or pentode)!

3. If you're operating at such high frequencies that you can't go with GK,
you're at the same frequencies where inductance screws you over. And these
sweeps don't even have multiple pin connections -- like most novar and
compactrons do (often, "undocumented" pins; find out by inspection). You're
barking up the wrong tree, forcing these tubes into this service. :-(

4. For tubes that should work higher, look for the short, late model sweeps
like 17GV5 and whatnot (B&W TV deflection). The electrode structure is
physically short, while the leads are the same length and size as any other
type; therefore the self-resonant frequency will be higher. Because, hey,
if you're trying to force >50MHz out of the poor things, every little bit
helps.

And they're cheaper, so you can use more of them, in parallel (or in any
other combination, if you like..), to get the same total plate dissipation.
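
For a feel for the numbers -- the lead inductance and electrode capacitance
figures below are assumed for illustration, not measured from any real tube:

# Rough self-resonance estimate, f = 1 / (2*pi*sqrt(L*C)).
from math import pi, sqrt

def f_res(L_henry, C_farad):
    return 1.0 / (2.0 * pi * sqrt(L_henry * C_farad))

bigger_sweep = f_res(30e-9, 25e-12)   # assumed: longer leads/structure
short_sweep  = f_res(15e-9, 20e-12)   # assumed: short, late-model sweep
print("%.0f MHz vs %.0f MHz" % (bigger_sweep / 1e6, short_sweep / 1e6))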

5. As for actually burning up connections? Geez... As the other Tim W.
mentioned, CO2, umm, "ain't gonna cut it". You could maybe use a SS diode
laser in the near-IR or red spectrum (mind, the glass may be opaque in NIR
too), but you still have to solve for refraction due to the glass envelope,
which is a cylinder for one thing, but usually a bit wavy or lumpy besides.

Worst of all, vaporizing the connection will do two things:
a. Huge release of trapped gas. They bake them out during manufacture, but
I doubt they're 100% gas-free. They don't get them meltingly-hot. (Fair
point: if nickel does desorb fully at red or orange hot, then this won't be
a problem after all!)
b. That metal has to go somewhere. If you go carefully and melt it down,
you can get it to blob up, which will be better than vaporizing it outright.

Also, note that, in a vacuum, there's nothing to clear away melted gunk. It
won't magically *burn* a clean hole through the metal -- real laser cutters
use compressed air to blow away, and burn -- combust -- the heated material!

Probably, some evaporation will still occur, whether due to thin webs
getting superheated, or simple evaporation off the surface of the blobs.
(Checking, it seems nickel's vapor pressure, at the melting point, is
~fractional pascals, which is quite a lot higher than the pressure in a
normal vacuum tube. So it will evaporate noticeably.)

Any evaporated metal will deposit, well, pretty much anywhere it can,
line-of-sight. It'll go on the mica insulator (don't put the poor tube into
a TV receiver again!), it'll go on the envelope (maybe breaking it, because
that will absorb laser power?), it'll go on the grid and cathode (through
any holes nearby), contaminating them...

The surest way to proceed, would be to do it under maintained vacuum. Weld
an exhaust tube onto the glass nipple and pump it down. Ah, but at this
point, you might as well cut the whole thing open, go in there with diagonal
cutters and cut the damned thing by hand (sorry it's not as cool as Frikken
Lazers ;-) ), and seal it back up and re-pump (and re-getter--oh right,
you'll need to replace the getter too, and flash it later).

But at least if you're going to that trouble, you can put an actually-useful
base on it, like a compactron, and add more leads to the critical
electrodes.

Reworking a sweep tube... well... that'd be one hell of a *hack*, that's for
sure. :D

Tim

--
Seven Transistor Labs, LLC
Electrical Engineering Consultation and Contract Design
Website: http://seventransistorlabs.com


"Phil Hobbs" <pcdhSpamMeSenseless@electrooptical.net> wrote in message
news:kYudnSdj1_YYHf_FnZ2dnUU7-T_NnZ2d@supernews.com...
[quoted text elided]
 
On Tue, 27 Dec 2016 09:53:56 -0500, Phil Hobbs wrote:

[quoted text elided]

First, assuming for the moment that the data sheet shows the cathode/
screen coming out to more than one pin, and that the data sheet indicates
that the connection between cathode and screen is "breakable" into two
independent circuits, you cannot assume that the real tube is constructed
that way -- even if you had a magic way of breaking a wire inside the
tube, there's no guarantee that you'd be able to get the connections you
want.

Second, far and away the most common laser for cutting is the CO2
laser, and I'm pretty sure that glass is opaque at CO2 laser wavelengths
-- this means that not only could you not get through to the interior, if
you tried you'd cut the envelope.

Even if you _could_ find a way to cut things inside the envelope (using
lasers and methods that would be exceedingly dangerous to your vision,
and that of anyone around you), you'd vaporise a part of the wire, and
the stuff that came off would plate itself on to anything nearby. You
may end up with a viable tube in the end, but I suspect not.

You'd be in for much less work and expense if you just find a suitable
tube for your experiments, and buy some.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
 
YOU would need a Q-switched Nd:YAG laser to even try this. A carbon dioxide laser would just crack the glass from absorption. Three mechanical things are going to happen when you try this.


1. The shorting wire IS going to slowly vaporize and fill the tube with a fine black metal powder.

2. The wire you're cutting will outgas.

3. The Dumet or Alloy 51 wire that passes through the glass will heat up to red hot and crack the glass.

I've used lasers to heat objects in vacuum before. I know of what I speak.

Steve
 
