{{about|the signal coding technique|the Bluetooth audio codec|SBC (codec)}}

[[File:SubBandCoding.svg|thumb|500px|Sub-band coding and decoding signal flow diagram]]
In [[signal processing]], '''subband coding''' ('''SBC''') is any form of [[transform coding]] that breaks a signal into a number of different [[frequency band]]s, typically with a [[filter bank]] or a [[fast Fourier transform]], and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.

SBC is the core technique used in many popular [[lossy audio compression]] algorithms, including [[MP3]].

==Encoding audio signals==
The simplest way to digitally encode audio signals is [[pulse-code modulation]] (PCM), which is used on [[audio CDs]], [[Digital Audio Tape|DAT]] recordings, and so on. Digitization transforms continuous signals into discrete ones by sampling a signal's amplitude at uniform intervals and [[rounding]] to the nearest value representable with the available [[Audio bit depth|number of bits]]. This process is fundamentally inexact, and involves two errors: ''[[discretization error]],'' from sampling at intervals, and ''[[quantization error]],'' from rounding.
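
The rounding step can be sketched as follows. This is only an illustration: the 16-bit depth, the 44.1&nbsp;kHz rate and the normalised ±1 range below are the CD-audio values used as an example, not requirements of PCM itself.
<syntaxhighlight lang="python">
import numpy as np

def quantize_pcm(samples, bits=16):
    """Round samples in the range -1.0..1.0 to the nearest of 2**bits levels."""
    levels = 2 ** (bits - 1)                      # 32768 levels each side for 16-bit audio
    return np.clip(np.round(samples * levels), -levels, levels - 1).astype(np.int32)

def dequantize_pcm(codes, bits=16):
    """Map integer PCM codes back to the -1.0..1.0 range."""
    return codes / 2 ** (bits - 1)

# The round-trip difference is the quantization error introduced by PCM.
t = np.linspace(0, 1, 44100, endpoint=False)      # one second sampled at 44.1 kHz
x = 0.5 * np.sin(2 * np.pi * 440 * t)             # a 440 Hz test tone
error = x - dequantize_pcm(quantize_pcm(x))
</syntaxhighlight>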


The more bits used to represent each sample, the finer the granularity in the digital representation, and thus the smaller the quantization error. Such ''quantization errors'' may be thought of as a type of noise, because they are effectively the difference between the original source and its binary representation. With PCM, the audible effects of these errors can be mitigated with [[dither]] and by using enough bits to ensure that the noise is low enough to be masked either by the signal itself or by other sources of noise. A high quality signal is possible, but at the cost of a high [[bitrate]] (e.g., over 700 [[kbit/s]] for one channel of CD audio). In effect, many bits are wasted in encoding masked portions of the signal because PCM makes no assumptions about how the human ear hears.
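
The figure of over 700 kbit/s follows directly from the CD-audio parameters of 44,100 samples per second and 16 bits per sample for a single channel:
<syntaxhighlight lang="python">
samples_per_second = 44_100     # CD audio sampling rate
bits_per_sample = 16            # CD audio bit depth

bitrate = samples_per_second * bits_per_sample
print(bitrate)                  # 705600 bit/s, i.e. about 706 kbit/s per channel
</syntaxhighlight>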


Coding techniques reduce bitrate by exploiting known characteristics of the auditory system. A classic method is nonlinear PCM, such as the [[μ-law algorithm]]. Small signals are digitized with finer granularity than are large ones; the effect is to add noise that is proportional to the signal strength. Sun's [[Au file format]] for sound is a popular example of μ-law encoding. Using 8-bit μ-law encoding would cut the per-channel bitrate of CD audio down to about 350 kbit/s, half the standard rate. Because this simple method only minimally exploits masking effects, it produces results that are often audibly inferior to the original.
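
A sketch of the continuous μ-law companding curve is shown below, with μ&nbsp;=&nbsp;255 as used for 8-bit encoding. The standardised [[G.711]] codec implements a piecewise-linear approximation of this curve rather than the formula itself, so this is an illustration of the idea, not of that codec.
<syntaxhighlight lang="python">
import numpy as np

MU = 255.0  # standard value for 8-bit mu-law

def mu_law_compress(x, mu=MU):
    """Continuous mu-law companding of samples in the range -1.0..1.0."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=MU):
    """Inverse of mu_law_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

# Companding before an 8-bit uniform quantizer gives small amplitudes finer
# steps than large ones, so the quantization noise roughly tracks the signal level.
</syntaxhighlight>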

==Basic principles==
The utility of SBC is perhaps best illustrated with a specific example. When used for audio compression, SBC exploits [[auditory masking]] in the [[auditory system]]. Human ears are normally sensitive to a wide range of frequencies, but when a sufficiently loud signal is present at one frequency, the ear will not hear weaker signals at nearby frequencies. We say that the louder signal masks the softer ones.


The basic idea of SBC is to enable data reduction by discarding information about frequencies which are masked. The result differs from the original signal, but if the discarded information is chosen carefully, the difference will not be noticeable or, more importantly, will not be objectionable.

First, a digital filter bank divides the input signal spectrum into some number (e.g., 32) of subbands. The psychoacoustic model looks at the energy in each of these subbands, as well as in the original signal, and computes masking thresholds using psychoacoustic information. Each of the subband samples is quantized and encoded so as to keep the quantization noise below the dynamically computed masking threshold. The final step is to format all these quantized samples into groups of data called frames, to facilitate eventual playback by a decoder.
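
A deliberately simplified sketch of this structure is given below. The block length, the 32-band split over FFT bins, the fixed 4-bit quantizer and the crude energy threshold standing in for the psychoacoustic model are all illustrative choices; practical coders use polyphase or QMF filter banks, far more elaborate masking models and adaptive bit allocation.
<syntaxhighlight lang="python">
import numpy as np

N_BANDS = 32            # number of subbands (MPEG-1 Layers I and II also use 32)
BLOCK = 512             # samples per analysis block (illustrative)

def encode_block(block):
    """Toy subband encoder for one block of BLOCK samples."""
    spectrum = np.fft.rfft(block)                    # time-frequency mapping
    bands = np.array_split(spectrum, N_BANDS)        # group FFT bins into subbands
    energies = np.array([np.mean(np.abs(b) ** 2) for b in bands])

    # Crude stand-in for a psychoacoustic model: treat any band more than
    # 60 dB below the strongest band as masked.
    threshold = energies.max() * 1e-6

    frame = []                                       # the encoded frame
    for band, energy in zip(bands, energies):
        if energy <= threshold:
            frame.append(None)                       # masked band: send nothing
        else:
            scale = np.abs(band).max()               # scale factor for this band
            bits = 4                                 # fixed toy bit allocation
            codes = np.round(band / scale * (2 ** (bits - 1) - 1))
            frame.append((scale, bits, codes))       # quantized subband samples
    return frame
</syntaxhighlight>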


Decoding is much easier than encoding, since no psychoacoustic model is involved. The frames are unpacked, subband samples are decoded, and a frequency-time mapping reconstructs an output audio signal.
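
A matching toy decoder for the frames produced by the sketch above simply reverses those steps; again, the block size and band layout are the illustrative values chosen earlier, not part of any standard.
<syntaxhighlight lang="python">
import numpy as np

def decode_block(frame, block_size=512):
    """Toy decoder for frames from encode_block above: dequantize each
    subband, reassemble the spectrum and apply the frequency-time mapping."""
    n_bins = block_size // 2 + 1                     # rfft output length
    # Recover the per-band bin counts used by the encoder's array_split.
    sizes = [len(part) for part in np.array_split(np.zeros(n_bins), len(frame))]

    pieces = []
    for entry, size in zip(frame, sizes):
        if entry is None:
            pieces.append(np.zeros(size, dtype=complex))   # masked band: silence
        else:
            scale, bits, codes = entry
            pieces.append(codes / (2 ** (bits - 1) - 1) * scale)
    spectrum = np.concatenate(pieces)
    return np.fft.irfft(spectrum, n=block_size)            # back to time domain
</syntaxhighlight>
Chaining the two sketches over consecutive blocks reconstructs a signal in which only the bands judged masked have been discarded.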


==Applications==
Beginning in the late 1980s, a standardization body, the [[Moving Picture Experts Group]] (MPEG), developed standards for coding of both audio and video. Subband coding resides at the heart of the popular MP3 format (more properly known as [[MPEG-1 Audio Layer III]]), for example.


Sub-band coding is used in the [[G.722]] codec, which uses sub-band adaptive differential pulse-code modulation (SB-[[ADPCM]]) operating at a bit rate of 64 kbit/s. In the SB-ADPCM technique, the frequency band is split into two sub-bands (higher and lower), and the signal in each sub-band is encoded using ADPCM.
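
The band split can be illustrated with the simplest quadrature mirror pair (the Haar filters). G.722 itself specifies a 24-tap QMF, whose coefficients are not reproduced here, and then feeds each half-band signal to its own ADPCM coder; the sketch below shows only the analysis split.
<syntaxhighlight lang="python">
import numpy as np

# Haar analysis pair standing in for G.722's 24-tap quadrature mirror filters.
h_low = np.array([1.0, 1.0]) / np.sqrt(2)      # lowpass analysis filter
h_high = np.array([1.0, -1.0]) / np.sqrt(2)    # highpass (mirror) analysis filter

def qmf_split(x):
    """Split a signal into lower and upper half-band signals, each at
    half the input sampling rate."""
    low = np.convolve(x, h_low)[::2]            # filter, then decimate by 2
    high = np.convolve(x, h_high)[::2]
    return low, high

# In G.722 the 16 kHz input is split this way into two 8 kHz sub-band signals;
# the lower band is ADPCM-coded at 6 bits per sample and the upper at 2,
# giving 48 + 16 = 64 kbit/s in the codec's highest-rate mode.
</syntaxhighlight>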


==References==
{{FOLDOC}}


==External links==
* [https://web.archive.org/web/20070613152917/http://www.otolith.com/otolith/olt/sbc.html Sub-Band Coding Tutorial]


{{Compression Methods}}



[[Category:Data compression]]
[[Category:Audio engineering]]
[[Category:Signal processing]]
