
Learn Digital Processing of Speech Signals with Rabiner Solution Manual



To date, various signal processing models and algorithms have been used in biological sequence analysis, among which hidden Markov models (HMMs) have been especially popular. HMMs are well known for their effectiveness in modeling the correlations between adjacent symbols, domains, or events, and they have been used extensively in many fields, especially in speech recognition [1] and digital communication. Considering the remarkable success of HMMs in engineering, it is no surprise that a wide range of problems in biological sequence analysis have also benefited from them. For example, HMMs and their variants have been used in gene prediction [2], pairwise and multiple sequence alignment [3, 4], base-calling [5], modeling DNA sequencing errors [6], protein secondary structure prediction [7], ncRNA identification [8], RNA structural alignment [9], acceleration of RNA folding and alignment [10], fast noncoding RNA annotation [11], and many other tasks.
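
As a toy illustration of the adjacent-symbol modeling described above, here is a minimal sketch, not drawn from any of the cited works, of Viterbi decoding with a two-state HMM over a DNA alphabet (for example, labeling AT-rich versus GC-rich regions). The state names, transition and emission probabilities, and the example sequence are all illustrative assumptions.

```python
# A minimal sketch (not from the cited works): Viterbi decoding for a toy
# two-state HMM over a DNA alphabet, e.g. labeling AT-rich vs. GC-rich regions.
# The state names, probabilities, and example sequence are illustrative only.
import numpy as np

states = ["AT-rich", "GC-rich"]
symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

start = np.log(np.array([0.5, 0.5]))                 # initial state log-probabilities
trans = np.log(np.array([[0.9, 0.1],                 # state transition log-probabilities
                         [0.1, 0.9]]))
emit = np.log(np.array([[0.35, 0.15, 0.15, 0.35],    # emissions in the AT-rich state
                        [0.15, 0.35, 0.35, 0.15]]))  # emissions in the GC-rich state

def viterbi(seq):
    """Return the most probable state path for an observed DNA string."""
    obs = [symbols[c] for c in seq]
    n_states, n_obs = len(states), len(obs)
    delta = np.full((n_obs, n_states), -np.inf)      # best log-prob of a path ending in each state
    back = np.zeros((n_obs, n_states), dtype=int)    # backpointers for the traceback
    delta[0] = start + emit[:, obs[0]]
    for t in range(1, n_obs):
        for j in range(n_states):
            scores = delta[t - 1] + trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]               # trace back from the best final state
    for t in range(n_obs - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi("ATATATGCGCGCGCATAT"))
```

Practical sequence-analysis HMMs differ mainly in scale (more states, richer emission models, trained parameters), but the decoding step follows the same recursion.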


Alan V. Oppenheim was born in 1937 in New York, N.Y. He received simultaneous bachelor's and master's degrees in electrical engineering from MIT in 1961, and a Sc.D. in electrical engineering in 1964, also from MIT. He joined MIT's electrical engineering department in 1964, and in 1967 he also took an appointment at MIT's Lincoln Laboratory. He has held various positions at both institutions since that time and is currently Ford Professor of Engineering at MIT. His principal research interests, in the field of digital signal processing, have focused on nonlinear dynamics and chaotic signals; speech, image, and acoustic signal processing; and knowledge-based signal processing. He is a Fellow of the IEEE (1977), cited for "contributions to digital signal processing and speech communications," and the recipient of the ASSP Technical Achievement Award (1977), the ASSP Society Award (1980), the Centennial Medal, and the Education Medal (1988). Oppenheim holds five patents and is the author or co-author of over 50 journal articles and of six major engineering texts, including Teacher's Guide to Introductory Network Theory (with R. Alter, 1965); Digital Signal Processing (with R. W. Schafer, 1975); Signals and Systems (with A. S. Willsky, 1983, 2nd ed. 1997); Discrete-Time Signal Processing (with R. W. Schafer, 1989); and Computer-Based Exercises for Signal Processing (with C. S. Burrus, et al., 1994).




Yes. There were some interesting things. One of them was that signal processing at that time was essentially analog. People built signal processing systems with analog components: resistors, inductors, capacitors, op amps, and things like that. My thesis advisor was actually Amar Bose, and when Tom Stockham came back from the Air Force, he hooked up with Amar. The thing that Bose wanted to do was very sophisticated acoustic modeling for room acoustics. What Tom was suggesting was to do this on a computer. Basically, people didn't use computers for signal processing other than for applications like oil exploration. In those instances, you would go off-line, collect a lot of data, take it back to the lab, and spend thousands of hours and millions of dollars processing it. The data is very expensive to collect and you're not trying to do things in real time; you are willing to do them off-line. Actually, I remember when Tom was doing this stuff with Amar, he was in what is now called the Laboratory for Computer Science. Generally, the computer people said that computers are not for signal processing; they are for inverting matrices and things like that. They said, "You don't do signal processing on a computer." The computer at that point was not viewed as a realistic way of processing signals. The other thing that was going on around that time was that Ben Gold and Charlie Rader at Lincoln were working on vocoders. Vocoders were analog devices that were built for speech compression. But there was the notion that you could simulate these filters on a computer before you actually built them, so that you knew what parameters you wanted. So there was that aspect, where techniques for processing signals digitally were starting to emerge. But even there, the initial thought was more simulation than actual implementation.


Well, I would say a couple of things about that. One is that computer people could not see the connection between computers and signal processing. I remember speaking to people in the computer field in the early to mid-'60s about computers and signal processing, and the reaction you would get was, "Well, computer people have to know some signal processing because you might need to put a scope on the line to see if you're getting pulses that are disturbing the computer." But the notion that you actually wanted to process signals with computers just didn't connect. I remember very specifically when Tom Stockham was on Project MAC, and he was experiencing tremendous frustration. People in the computer area had no appreciation for what he was doing with computers for signal processing.


Did computers generate a lot of excitement in the signal processing community?


I guess what I would say is that there was a very small group of people in the early '60s who felt that something significant was happening here. You could implement signal processing algorithms with computers that were impossible or too hard to do with analog hardware. An example of that was the whole notion of homomorphic signal processing. It was nonlinear, and you couldn't realistically do it with analog stuff. In fact, that's why Tom Stockham got excited, because he saw that the stuff I had done my thesis on could be done on a computer. The excitement really got generated when the Cooley-Tukey paper came out. When the FFT hit, there was a big explosion, because then you could see that by using a computer you could do some things incredibly efficiently. You could start thinking about doing things in real time. You could think of it as the difference between BC and AD: the birth of the FFT was a very significant event. Jim Kaiser recognized the value of using computers for signal processing even earlier than what we're talking about, although his focus was on digital filters. Jim's field was feedback control. In fact, I took a course on feedback systems from him when he was an instructor here. What happened in the control community is that it gravitated to something called sampled-data control, where essentially what you do is sample things and then, in the feedback path, you've got digital filters. Jim got into signal processing through that route and worked for quite a bit of time on digital filter design issues. That was kind of another thread into it. Then the community started to evolve. There were Tom Stockham, Charlie Rader, and Ben Gold at Lincoln Laboratory. I got involved with Lincoln Laboratory shortly after I graduated, through Tom and Ben. There was Larry Rabiner, who was finishing his thesis with Ben Gold and who then went to Bell Labs. When Ron Schafer graduated, he went to Bell Labs, and there was this kernel there. Some people would disagree with this view, but I would say that a lot grew out of the activity at Lincoln through Ben, Charlie, Tom, and myself. The activity at Bell Labs came through Ron, Larry, Jim Flanagan, and Jim Kaiser. Then there was activity at IBM after the Cooley-Tukey algorithm exploded.


Well, the thing that I recognized at the time was that there was no book like it. There was no course like it, either, and so this would be the first real textbook in this field. When we wrote it, we imagined that if it were successful, then it would be the basis for courses in lots of other schools. If I were going to identify a viewpoint, I would say the following: a traditional way that a lot of people viewed digital signal processing was as an approximation to analog signal processing. In analog signal processing, the math involves derivatives and integrals; you can't really do that on a computer, so you have to approximate it. How do you approximate the integral? How do you approximate the derivative? The viewpoint that I took in the course was that you start from the beginning, recognizing that you're talking about processing discrete-time signals; where they come from is a separate issue, and the mathematics for that is not an approximation to anything. Once you have things that are discrete in time, then there are things that you do with them. There were examples that I used in the course right from the beginning that clearly showed that if you took the other view it cost you a lot, that you would do silly things by taking analog systems, trying to approximate them digitally, and using that as your digital signal processing. I would say that was a strong component of it. There's a new undergraduate course that I'm involved in here with a fellow named George Verghese. It will probably end up as a textbook in four or five years. When you asked the question about what vision I had back then, it made me think of the vision I have now in this new course. There's no course like it in the country, there's no textbook for that course, and what I believe is that when we end up writing that book, it will launch this course in lots of other schools.


I got involved in a lot of work on filter design and speech processing of various kinds. The nature of my research has always been solutions in search of problems, in the same spirit as what happened with my doctoral thesis. So I tend not to end up looking for problems to solve. I tend to look for intriguing threads to tug on. I like looking for paradigm shifts.


The reason why it was so significant was that, prior to the Speak & Spell, you could basically think of digital signal processing as high-end. It was funded largely by the military or by high-end industry like the seismic industry, because it was expensive to do. There were algorithms like linear predictive coding, which came out of Bell Labs, done by Bishnu Atal and Manfred Schroeder, which were making a splash in the speech area. The National Security Agency was in the process of developing chips to do linear predictive coding for what's called the STU-3 or STU-2. It's their encrypted telephone.
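
For readers curious what linear predictive coding amounts to in practice, here is a minimal autocorrelation-method LPC sketch using the Levinson-Durbin recursion. This is a generic textbook formulation, not the specific Atal-Schroeder or Speak & Spell implementation, and the frame length, model order, and synthetic test frame are assumptions made for the example.

```python
# A minimal sketch of autocorrelation-method LPC via the Levinson-Durbin
# recursion -- a generic textbook formulation, not any specific product's coder.
# Frame length, model order, and the synthetic "frame" are illustrative only.
import numpy as np

def levinson_durbin(r, order):
    """Solve for the prediction-error filter A(z) = 1 + a[1]z^-1 + ... + a[order]z^-order
    from autocorrelation values r[0..order]; returns (a, residual energy)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])   # correlation of current predictor with r
        k = -acc / err                               # reflection (PARCOR) coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                         # prediction error shrinks at each step
    return a, err

def lpc(frame, order=10):
    """Autocorrelation-method LPC analysis of one windowed frame."""
    frame = frame * np.hamming(len(frame))
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    return levinson_durbin(r, order)

# Toy usage: a decaying resonance standing in for a short voiced-speech frame.
n = np.arange(240)
frame = np.cos(2 * np.pi * 0.07 * n) * np.exp(-0.01 * n)
a, residual = lpc(frame, order=4)
print(np.round(a, 3), residual)
```

The resulting coefficients describe an all-pole filter that models the spectral envelope of the frame, which is what made low-bit-rate speech coding, and later single-chip speech synthesis, practical.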

