
Pros and cons of MP3 at 128 kbps

Compressing audio data is tricky, and little can be said in advance. The most common format today, MPEG Layer 3 at a 128 kbps stream, provides quality that at first glance does not differ from the original; it is casually called "CD quality". Yet almost everyone knows people who turn up their noses at this "CD quality". What is wrong? Why is this quality not enough? A very difficult question. I myself am against 128 kbps compression, because the result sometimes turns out badly. But I have a number of 128 kbps recordings that I can hardly find fault with. Whether a 128 kbps stream is suitable for a given piece of material becomes clear, unfortunately, only after listening to the result repeatedly. I cannot say anything in advance: I personally know no signs that would let me predict the success of the result. But a 128 kbps stream is often quite enough for high-quality music encoding.

For 128 kbps encoding it is best to use Fraunhofer products: MP3 Producer 2.1 or later. The exception is MP3enc 3.0, which has an annoying bug that results in very poor encoding of high frequencies; versions above 3.0 do not suffer from this shortcoming.

First of all, some general words. Human perception of the sound picture depends very strongly on the symmetrical transmission of the two channels (stereo). Different distortions in different channels are perceived much worse than identical ones. Generally speaking, ensuring that both channels have sound characteristics that are as similar as possible while carrying different material (otherwise what kind of stereo is it?) is a big problem in sound recording, and one that is usually underestimated. If 64 kbps is enough for mono encoding, then 64 kbps per channel is not enough for stereo encoding in plain two-channel mode: the stereo result will sound much more wrong than each channel taken separately. Most Fraunhofer products limit mono to 64 kbps anyway, and I have not yet seen a mono recording (a clean recording, without noise or distortion) that would require a higher stream. For some reason our attachment to monophonic sound is much weaker than to stereophonic sound; apparently we simply do not take it seriously :) - from a psychoacoustic point of view it is just sound coming from a speaker, not an attempt to fully transmit some kind of picture.

Transmitting stereo signals imposes much more stringent requirements. After all, have you ever heard of a psychoacoustic model that takes into account the masking of one channel by another? Some reverse effects, so to speak, are also ignored, for example a stereo effect designed for both channels at once. The left channel alone masks its part of the effect within itself, so we will not hear it. But the presence of the right channel, the second half of the effect, changes our perception of the left channel: we subconsciously expect to hear the left side of the effect more, and this change in our psychoacoustics must also be taken into account. With mild compression, 128 kbps per channel (256 kbps total), these effects disappear, since each channel is presented fully enough to cover the need for transmission symmetry with a margin; but at streams of about 64 kbps per channel this is a big problem: conveying the subtle nuances of the joint perception of both channels requires a more accurate transmission than is currently possible in such streams.

It would have been possible, of course, to build a full acoustic model for two channels, but the industry took a different path, roughly equivalent but much simpler. A set of algorithms under the general name Joint Stereo is a partial solution to the problems described above. Most of the algorithms come down to extracting a center channel and a difference channel: mid/side stereo. The center channel carries the main audio information and is an ordinary mono channel formed from the two original channels, while the difference channel carries the remaining information that allows the original stereo sound to be restored. By itself this operation is completely reversible; it is just a different way of representing the two channels, one that is easier to work with when compressing stereo information.
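To make the representation concrete, here is a minimal sketch of the mid/side transform and its exact inverse (my own illustration, not code from any encoder; the 0.5 scaling is one of several conventions in use):

```python
import numpy as np

def ms_encode(left, right):
    """Convert L/R stereo to mid/side. Completely reversible."""
    mid = (left + right) / 2.0   # center channel: an ordinary mono mix
    side = (left - right) / 2.0  # difference channel: what makes it stereo
    return mid, side

def ms_decode(mid, side):
    """Restore the original L/R channels from mid/side."""
    return mid + side, mid - side

# Round-trip check on arbitrary stereo material
left, right = np.random.randn(1000), np.random.randn(1000)
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)
assert np.allclose(left, l2) and np.allclose(right, r2)
```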

Next, the center and difference channels are usually compressed separately, exploiting the fact that in real music the difference channel is relatively poor: the two channels have a great deal in common. The balance of compression between the center and difference channels is chosen on the fly, but in general a much larger stream is allocated to the center channel. Complex algorithms decide what is preferable at each moment: a more correct spatial picture, better transmission of the information common to both channels, or simply compression without mid/side stereo, that is, in dual channel mode.

Oddly enough, stereo compression is the weakest point of the Layer 3 128 kbps result. The creators of the format cannot really be criticized: this is still the least of the possible evils. Subtle stereo information is almost never perceived consciously (leaving aside the obvious things: the rough placement of instruments in space, artificial effects, and so on), so stereo quality is the last thing a person evaluates. Usually something gets in the way before you ever reach it: computer speakers, for example, introduce far more significant flaws, and one simply never gets to such subtleties as incorrect transmission of spatial information.

You should not think that what prevents you from hearing this shortcoming on computer speakers is that they are spaced a meter apart on either side of the monitor and do not create a sufficient stereo base. That is not even the point. What is at issue here is not the exact spatial arrangement of sounds, not the sound picture (which computer speakers indeed will never build), but the direct, conscious perception of the difference between the channels. Computer speakers (in standard use) or headphones actually provide a much clearer direct stereo experience than conventional music speakers.

To put it bluntly, for the direct, informative, cognitive perception of sound we do not really need accurate stereo information. It is quite difficult to directly detect the difference in this respect between the original and Layer 3 at 128 kbps, although it is possible. You need either a lot of experience or an amplification of the effects of interest. The simplest thing that can be done is to virtually spread the channels further apart than is physically possible. This is usually the effect behind the "3D Sound" button on cheap computer equipment, or in boom boxes whose speakers do not detach from the body of the device and are spaced too closely to convey beautiful stereo in a natural way. Spatial information is converted into the actual audio information of both channels: the difference between the channels increases.

I applied a stronger effect than usual so that the difference is easier to hear. Listen to how it should sound after encoding at 256 kbps with dual channels (256_channels_wide.mp3, 172 kB), and how it sounds after encoding at 128 kbps with joint stereo (128_channels_wide.mp3, 172 kB).

A digression. Both of these files are 256 kbps MP3s encoded with MP3 Producer 2.1. Don't get confused: I am, firstly, testing mp3 and, secondly, posting the results of testing mp3 as mp3 ;). It went like this: first I encoded a piece of music at 128 and at 256. Then I decompressed these files, applied the processing (a stereo expander), and compressed the results at 256, simply to save space, before posting them here.

By the way, in MP3 Producer 2.1 joint stereo turns off, and dual channel mode (two independent channels) turns on, only at 256 kbps. Even 192 kbps in Producer 2.1 is some kind of joint stereo, which is why my examples would have been compressed quite incorrectly into any stream below 256 kbps. This is the main reason why "full" quality starts at 256 kbps: historically, any lower stream in standard commercial Fraunhofer products (before '98) is joint stereo, which is in any case unacceptable for completely correct transmission. Other (or later) products, in principle, let you choose arbitrarily between joint stereo and dual channel for any stream.

About the results

In the original (which in this case corresponds exactly to 256 kbps) we heard the sound with the difference channel amplified and the center channel attenuated. The reverberation of the voice was heard very clearly, as were artificial reverberations and echoes in general: these spatial effects go mainly into the difference channel. To be specific, the mix here was 33% center channel and 300% difference channel. The extreme version of this effect, 0% center channel, is what gets switched on in equipment such as music centers by a button like "karaoke vocal fader", "voice cancellation / remove" or similar, whose purpose is to remove the voice from the recording. The trick works because the voice is usually recorded only in the center channel, that is, with identical presence in the left and right channels. By removing the center channel we remove the voice (and much else besides, which is why this function is pretty useless in real life). If you have such a device, you can listen to your MP3s through it yourself: you get an amusing joint stereo detector.
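Here is a sketch of that "detector" processing (my own illustration; the gain values are the ones quoted above):

```python
def joint_stereo_detector(left, right, mid_gain=0.33, side_gain=3.0):
    """Attenuate the center channel and boost the difference channel,
    making joint-stereo artifacts much easier to hear."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    new_mid, new_side = mid_gain * mid, side_gain * side
    return new_mid + new_side, new_mid - new_side  # back to L/R

# With mid_gain=0.0 this becomes the "karaoke voice removal" button:
# anything recorded identically in both channels (usually the voice) vanishes.
```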

From this example we can already understand, indirectly, what we have lost. Firstly, all spatial effects became noticeably worse; they were simply lost. Secondly, the gurgling is the result of spatial information turning into sound. What did it correspond to in space? Simply sound components moving almost randomly all the time, a kind of "spatial noise" that was not in the original recording (the original withstands even a complete conversion of spatial information into sound without extraneous effects appearing). It is known that this type of distortion often appears directly, without any additional processing, when encoding to low streams. It is just that direct sound distortions (which are almost always absent) are perceived consciously and immediately, while stereophonic ones (which, with joint stereo, are always present and in large quantities) are perceived only subconsciously, and only after some time of listening.

This is the main reason why Layer 3 at 128 kbps is not considered full CD quality. The fact is that turning stereo sound into mono in itself produces strong negative effects: often the same sound is repeated in the two channels with a slight delay, and when the channels are mixed this simply gives a sound smeared in time. Mono made from stereo sounds much worse than an original mono recording. The difference channel, added to the center (mixed mono) channel, gives complete separation back into right and left, but a partial absence of the difference channel (insufficient encoding) brings not only an inadequate spatial picture but also these unpleasant effects of stereo being mixed down into a single mono channel.
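A sketch of why such mixing smears the sound (my illustration, not the article's): summing a signal with a slightly delayed copy of itself produces comb-filter cancellations.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone
delay = int(0.0005 * fs)                  # ~0.5 ms inter-channel delay
left = tone
right = np.concatenate([np.zeros(delay), tone[:-delay]])
mono = (left + right) / 2.0               # mixdown to one channel

# At 1 kHz a 0.5 ms delay is about half a period: the two copies are
# nearly in antiphase, so the mono mix almost cancels, even though each
# channel on its own is at full level.
print(np.max(np.abs(left)))               # ~1.0
print(np.max(np.abs(mono[delay:])))       # ~0.004
```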

When all other obstacles are removed (the equipment is good, the tonal coloring and dynamics are unchanged, there is enough stream to encode the center channel), this problem will still remain. But there are recordings made in such a way that the negative effects of mid/side-based compression do not appear, and then 128 kbps gives the same full quality as 256 kbps. A special case is a recording that is rich in stereo information but poor in sound information, for example slow piano playing. In that case the stream allocated to the difference channel is quite sufficient to transmit accurate spatial information. There are also cases that are harder to explain: an active arrangement filled with a variety of instruments that nevertheless sounds very good at 128 kbps. This is rare, maybe one case out of five or ten, but it does occur.

Now to the sound itself. It is difficult to isolate immediate defects in the sound of the center channel in Layer 3 at 128 kbps. The absence of frequencies above 16 kHz (which, by the way, occur very rarely, though they do sometimes get transmitted) and a certain reduction in the amplitude of the very high ones are, strictly speaking, trifles in themselves. A person gets completely used to tonal distortions of this magnitude within a few minutes; they simply cannot be considered strong negative factors. Yes, they are distortions, but for the perception of "full quality" they are of distinctly secondary importance. On the side of the center, directly audible channel, troubles of a different kind are possible: a sharp restriction of the stream available for encoding this channel, caused simply by a combination of circumstances, such as very abundant spatial information, a moment loaded with many sounds, frequent inefficient short blocks and, as a result of all this, a completely exhausted reserve bit buffer. This happens, but relatively rarely, and when it does, it is usually noticeable continuously over large fragments.

It is very difficult to show defects of this kind explicitly, so that anyone can notice them. A person used to dealing with sound notices them easily even without processing, but to an ordinary uncritical listener the result may seem completely indistinguishable from the original, and the whole discussion some kind of abstract digging into something that is not really there... Still, look at the example. To extract the defect, strong processing had to be applied: greatly reducing the content of mid and high frequencies after decoding. By removing the frequency nuances that get in the way of hearing it, we of course disrupt the assumptions of the coding model, but it helps to understand better what we are losing. So: how it should sound (256_bass.mp3, 172 kB), and what comes out after decoding and processing a 128 kbps stream (128_bass.mp3, 172 kB). Note the noticeable loss of bass continuity and smoothness, and some other anomalies. The transmission of low frequencies was in this case sacrificed in favor of higher frequencies and spatial information.

It should be noted that the workings of the acoustic compression model can be observed (with careful study and some experience with sound) even at 256 kbps if a more or less strong equalizer is applied. If you do this and then listen, you can sometimes (quite often, in fact) notice unpleasant effects (ringing/gurgling). More importantly, after such a procedure the sound takes on an unpleasant, uneven character that is very difficult to notice immediately but becomes apparent with prolonged listening. The only difference between 128 and 256 is that in a 128 kbps stream these effects often exist without any processing at all. They too are difficult to notice right away, but they are there; the bass example gives some idea of where to look for them. In high streams (above 256 kbps) it is simply impossible to hear this without processing. That problem does not apply to high streams, but there is something that sometimes (very rarely) prevents even Layer 3 at 256 kbps from counting as the original: the time parameters (more details will follow in a separate article: see MPEG Layer3 - 256 /link to another article/).

There are recordings that are not affected by this problem. The easiest way is to list the factors that, on the contrary, lead to the distortions described above. If none of them is present, there is a good chance of a completely successful (in this respect) encoding in Layer 3 at 128 kbps. Everything depends, however, on the specific material...

First of all: noise, hardware noise so to speak. If the recording is noticeably noisy, it is highly undesirable to encode it into small streams, since too much of the stream is spent encoding unnecessary information, which, moreover, does not lend itself well to sensible coding with an acoustic model.

  • Noise as such: extraneous sounds of all kinds. The monotonous noise of a city, street, restaurant, etc., against which the main action takes place. Sounds of this type produce a very rich stream of information that has to be encoded, and the algorithm will have to sacrifice something in the main material.
  • Unnaturally strong stereo effects. This is related to the previous point, but in any case too much of the stream goes to the difference channel, and the coding of the center channel degrades greatly.
  • Strong phase distortion that differs between the channels. Strictly speaking, this relates more to flaws common in current coding algorithms than to the standard itself, but still: the wildest distortions begin because the whole process is completely disrupted. In most cases such distortion of the original recording comes from recording on cassette equipment and subsequent digitization, especially when played back on inexpensive tape recorders with a poor-quality tape transport: the heads are crooked, the tape winds at an angle, and the channels end up slightly delayed relative to each other.
  • Simply overloaded material. Quite roughly speaking, a large symphony orchestra playing all at once :). The usual result of compression at 128 kbps is something very schematic: chamber, brass, drums, soloist. This occurs, of course, not only in classical music.

The other pole is something that usually compresses well:

  • A solo instrument with a relatively simple sound: guitar, piano. The violin, for example, has too rich a spectrum and usually does not sound very good; a lot here depends on the particular violinist and violin. Several instruments together also usually compress quite well: bard or KSP (amateur song) recordings, for example (instrument + voice).
  • High-quality modern music production. I mean not the musical quality but the quality of the sound: the mixing, the arrangement of instruments, the categorical absence of complex global effects, of decorative sounds and, in general, of anything superfluous. Practically all modern pop music falls easily into this category, as does some rock, and in general quite a lot of material.
  • Aggressive, "electric" music. To give at least some example: early Metallica (and modern Metallica, on the whole, too). [Remember, this is not about musical styles! Just an example.]

It is worth noting that Layer 3 compression is almost indifferent to parameters such as the presence or absence of high frequencies or bass, dull or bright coloring, and so on. There is a dependence, but it is so weak that it can be ignored.

Unfortunately (or fortunately?), in the end everything comes down to the listener. Many people, without preparation or preliminary selection, hear the difference between streams of about 128 kbps and the original; many others do not perceive even synthetic extreme examples as different. The former do not need to be convinced of anything, and the latter cannot be convinced by such examples... One could simply say that the difference exists for some and not for others, if not for one thing: in the process of listening to music, our perception improves over time. What seemed good quality yesterday may no longer seem so tomorrow; it always happens that way. And while compressing at 320 kbps instead of 256 kbps is rather pointless (at least in my opinion: the gain is understandable but no longer very important), storing music at no less than 256 kbps is still worth it.


Learn more about data transfer

General information

Data can be either digital or analog, and data transmission can likewise take place in either of these two formats. If both the data and the method of transmission are analog, the transmission is analog. If either the data or the transmission method is digital, the transmission is called digital. This article deals specifically with digital data transmission. Nowadays data is increasingly transmitted and stored in digital format, as this speeds up the transmission process and increases the security of information exchange. Apart from the weight of the devices needed to send and process it, digital data itself is weightless. Replacing analog data with digital data helps facilitate the exchange of information. Data in digital format is more convenient to take on the road: compared to data in analog format, for example on paper, digital data takes up no space in your luggage apart from the storage medium. Digital data allows users with Internet access to work in a virtual space from anywhere in the world where the Internet is available. Multiple users can work with digital data at the same time by accessing the computer on which it is stored, using the remote administration programs described below. Various Internet applications such as Google Docs, Wikipedia, forums, and blogs also allow users to collaborate on a single document. That is why transmission of data in digital format is so widely used.

Lately, eco-friendly "green" offices have become popular, where companies try to move to paperless technology in order to reduce their carbon footprint. This has made the digital format even more popular. The claim that getting rid of paper will significantly reduce energy costs is, however, not entirely correct. In many cases this opinion is promoted by the advertising campaigns of those who benefit when more people switch to paperless technologies, for example manufacturers of computers and software, and those who provide services in this area, such as cloud computing. In fact the costs are nearly equal, since running computers and servers and supporting the network require a large amount of energy, which is often obtained from non-renewable sources, such as burning fossil fuels. Many hope that paperless technology will indeed become more cost-effective in the future. In everyday life, too, people have begun to work more with digital data, for example preferring e-books and tablets to paper ones. Large companies often announce in press releases that they are going paperless to show that they care about the environment. As noted above, this is sometimes just a publicity stunt, but even so, more and more companies are paying attention to digital information.

In many cases the sending and receiving of data in digital format is automated, and very little is required from users for such an exchange. Sometimes they just need to press a button in the program the data was created in, as when sending an email. This is very convenient for users, since most of the data transfer work happens behind the scenes, in data centers. This work includes not only the direct processing of data but also the creation of the infrastructure for its rapid transmission. For example, to provide fast communication over the Internet, an extensive system of cables has been laid along the ocean floor, and their number is gradually increasing. These deep-sea cables cross the bottom of every ocean several times and are laid through seas and straits to connect countries with access to the sea. Laying and maintaining these cables is just one example of work behind the scenes. Such work also includes providing and maintaining communications in data centers and at ISPs, the maintenance of servers by hosting companies, and the smooth operation of websites ensured by administrators, especially sites that let users transfer data in large volumes: mail forwarding, file downloads, publication of materials, and other services.

The following conditions are necessary for transmitting data in digital format: the data must be correctly encoded, that is, put into the correct format; a communication channel, a transmitter, and a receiver are needed; and, finally, there must be data transmission protocols.

Encoding and sampling

The available data is encoded so that the receiving party can read and process it. Encoding, or converting data from analog to digital format, is called sampling (digitization). Most often data is encoded in the binary system, that is, information is represented as a series of alternating ones and zeros. Once the data is encoded in binary, it is transmitted as electromagnetic signals.

If data in analog format needs to be transmitted over a digital channel, it is sampled. For example, analog telephone signals from a telephone line are encoded into digital form in order to transmit them over the Internet to a recipient. The sampling process relies on the Kotelnikov theorem, known in English as the Nyquist-Shannon theorem, or simply the sampling theorem. According to this theorem, a signal can be converted from analog to digital form without loss of quality if its maximum frequency does not exceed half the sampling rate. Here the sampling rate is the frequency at which the analog signal is "sampled", that is, at which its characteristics are measured.
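A small worked illustration of the bound (mine, not the article's; the telephony numbers are the standard textbook example):

```python
def min_sample_rate(max_signal_hz: float) -> float:
    """Nyquist-Shannon bound: the sampling rate must be at least
    twice the highest frequency present in the analog signal."""
    return 2.0 * max_signal_hz

# Telephone speech is band-limited to about 3.4 kHz...
print(min_sample_rate(3400))  # 6800.0 Hz; 8000 Hz is used in practice
# ...and a 44.1 kHz stream can carry components up to 22.05 kHz
print(44100 / 2)              # 22050.0
```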

Signal encoding can be either protected or open. If a protected signal is intercepted by people for whom it was not intended, they will not be able to decode it. In such cases strong encryption is used.

Communication channel, transmitter and receiver

The communication channel provides a medium for transmitting information, while transmitters and receivers are directly involved in sending and receiving the signal. A transmitter consists of a device that encodes the information, such as a modem, and a device that transmits the data in the form of electromagnetic waves. The latter can be, for example, the simplest device of all, an incandescent lamp transmitting messages in Morse code, or a laser, or an LED. To recognize these signals a receiving device is needed. Examples of receiving devices are photodiodes, photoresistors, and photomultipliers, which detect light signals, or radio receivers, which pick up radio waves. Some of these devices only work with analog data.

Communication protocols

Data transfer protocols are like a language that devices use to communicate with each other during data transfer. Protocols also detect errors that occur during the transfer and help resolve them. An example of a widely used protocol is the Transmission Control Protocol, or TCP.

Applications

Digital transmission is important because, without it, it would be impossible to use computers. Below are some interesting examples of the use of digital data transmission.

IP telephony

IP telephony, also known as Voice over IP (VoIP), has recently gained popularity as an alternative form of telephone communication. The signal is transmitted over a digital channel, using the Internet instead of a telephone line, which makes it possible to transmit not only sound but also other data, such as video. Examples of the largest providers of such services are Skype and Google Talk; recently the LINE program, created in Japan, has also been very popular. Most providers offer audio and video calls between computers and smartphones connected to the Internet for free; additional services, such as calls from a computer to a phone, are provided for a fee.

Working with a thin client

Digital data transfer helps companies not only to simplify the storage and processing of data but also to organize work with computers within the organization. Companies sometimes use some of their computers for simple calculations or operations, such as Internet access, and using ordinary computers in this situation is not always advisable, since their memory, processing power, and other resources are not fully utilized. One solution is to connect such computers to a server that stores the data and runs the programs these computers need. Computers with such simplified functionality are called thin clients. They are meant only for simple tasks, such as accessing a library catalog, or for simple programs such as a cash register that records each sale in a database and prints receipts. Typically a thin client user works with a monitor and keyboard; the information is not processed on the thin client but is sent to the server. The convenience of a thin client is that it gives the user remote access to the server through a monitor and keyboard while needing no powerful microprocessor, hard disk, or other hardware.

In some cases special equipment is used, but often a tablet computer, or a monitor and keyboard from a regular computer, is enough. The only thing processed on the thin client itself is the system interface; all other data is processed by the server. It is interesting to note that ordinary computers, which, unlike thin clients, do process data, are sometimes called thick clients.

Using thin clients is not only convenient but also economical. Installing a new thin client does not require large expenses, since it needs no expensive software or hardware: no extra memory, hard drive, processor, licenses, and so on. In addition, hard drives and processors stop working in rooms that are too dusty, hot, or cold, as well as in high humidity and other adverse conditions. With thin clients, favorable conditions are needed only in the server room: the thin clients themselves have no processors or hard drives, and monitors and input devices work fine in harsher conditions.

The disadvantage of thin clients is that they work poorly when the graphical interface needs frequent updating, as with video and games. It is also a problem that if the server stops working, all the thin clients connected to it stop working too. Despite these shortcomings, companies are using thin clients more and more.

Remote administration

Remote administration is similar to working with a thin client in that a computer with access to the server (the client) can store and process data and use programs on the server. The difference is that the client in this case is usually "thick". In addition, thin clients are most often connected over a local network, while remote administration takes place over the Internet. Remote administration has many uses, for example letting people work remotely on a company server or on their own home server. Companies that do part of their work in remote offices, or that cooperate with third parties, can give such offices access to information through remote administration. This is convenient if, for example, customer support is handled in one of these offices but all company personnel need access to the customer database. Remote administration is usually secure, and it is not easy for outsiders to gain access to the servers, although a risk of unauthorized access does sometimes exist.


Today the Internet is needed in every home no less than water or electricity, and in every city there are plenty of companies, large and small, that can provide people with Internet access.

Users can choose any Internet package, from a maximum of 100 Mbps down to low speeds such as 512 kbps. How do you choose the right speed and the right Internet provider for yourself?

Of course, the Internet speed should be chosen based on what you do online and how much you are willing to pay per month for access. From my own experience I can say that a speed of 15 Mbps suits me quite well as someone who works on the network. While working I keep two browsers running, each with 20-30 open tabs, and problems arise more on the computer's side (working with a large number of tabs requires a lot of RAM and a powerful processor) than from the Internet speed. The only time I have to wait a little is when a browser first launches and all the tabs load at once, but that usually takes no more than a minute.

1. What Internet speed values mean

Many users confuse Internet speed values, thinking that 15 Mb/s means 15 megabytes per second. In fact 15 Mb/s is 15 megabits per second, and a bit is 8 times smaller than a byte, so the actual download speed for files and pages comes out to about 2 megabytes per second. If you usually download movies of about 1500 MB for viewing, then at 15 Mbps a movie will download in 12-13 minutes.
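A minimal sketch of that arithmetic (my own illustration; the 1500 MB movie is the figure from the text):

```python
def mbps_to_mb_per_s(speed_mbps: float) -> float:
    """Convert megabits per second to megabytes per second (1 byte = 8 bits)."""
    return speed_mbps / 8.0

def download_minutes(size_mb: float, speed_mbps: float) -> float:
    """Minutes needed to download size_mb megabytes at speed_mbps."""
    return size_mb / mbps_to_mb_per_s(speed_mbps) / 60.0

print(mbps_to_mb_per_s(15))        # 1.875 MB/s, i.e. "about 2 megabytes"
print(download_minutes(1500, 15))  # ~13.3 minutes for a 1500 MB movie
```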

Is your Internet speed a lot or a little? Let's see:

  • A speed of 512 kbps: 512 / 8 = 64 kB/s (not enough for watching online video);
  • A speed of 4 Mbps: 4 / 8 = 0.5 MB/s, or 512 kB/s (enough for online video up to 480p);
  • A speed of 6 Mbps: 6 / 8 = 0.75 MB/s (enough for online video up to 720p);
  • A speed of 16 Mbps: 16 / 8 = 2 MB/s (enough for online video up to 2K);
  • A speed of 30 Mbps: 30 / 8 = 3.75 MB/s (enough for online video up to 4K);
  • A speed of 60 Mbps: 60 / 8 = 7.5 MB/s;
  • A speed of 70 Mbps: 70 / 8 = 8.75 MB/s (enough for online video in any quality);
  • A speed of 100 Mbps: 100 / 8 = 12.5 MB/s (enough for online video in any quality).

Many people connecting to the Internet worry about being able to watch online video, so let's see how much traffic movies of different quality require.

2. Internet speed required to watch online video

Here you can find out whether your speed is enough for watching online video in different quality formats.

Broadcast type | Video bitrate | Audio bitrate (stereo) | Traffic, MB/s (megabytes per second)
Ultra HD 4K    | 25-40 Mbps    | 384 kbps               | from 2.6
1440p (2K)     | 10 Mbps       | 384 kbps               | 1.2935
1080p          | 8000 kbps     | 384 kbps               | 1.0435
720p           | 5000 kbps     | 384 kbps               | 0.6685
480p           | 2500 kbps     | 128 kbps               | 0.3285
360p           | 1000 kbps     | 128 kbps               | 0.141
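The table values follow from the same bits-to-bytes arithmetic; here is a sketch (mine) that reproduces, for example, the 480p and 360p rows:

```python
def traffic_mb_per_s(video_kbps: float, audio_kbps: float) -> float:
    """Total stream in megabytes per second from video + audio bitrates."""
    return (video_kbps + audio_kbps) / 8.0 / 1000.0

print(traffic_mb_per_s(2500, 128))  # 0.3285 MB/s for 480p
print(traffic_mb_per_s(1000, 128))  # 0.141 MB/s for 360p
```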

We can see that all the most popular formats play without problems at an Internet speed of 15 Mbps, while watching video in 2160p (4K) requires at least 50-60 Mbps. But there is one BUT: I doubt that many servers can actually serve video of this quality while sustaining such a speed, so even with a 100 Mbps connection you may still be unable to watch online video in 4K.

3. Internet speed for online games

When connecting home Internet, every gamer wants to be 100% sure that the speed will be enough for their favorite game. As it turns out, though, online games are not at all demanding of Internet speed. Here is the speed that popular online games require:

  1. DOTA 2 - 512 kbps
  2. World of Warcraft - 512 kbps
  3. GTA Online - 512 kbps
  4. World of Tanks (WoT) - 256-512 kbps
  5. Panzar - 512 kbps
  6. Counter-Strike - 256-512 kbps

Important! The quality of your online game depends not so much on the speed of the Internet as on the quality of the channel itself. For example, if you (or your provider) receive Internet via satellite, then no matter what package you use, the ping in the game will be much higher than over a wired channel with a lower speed.

4. Why you might need Internet faster than 30 Mbps

In exceptional cases I might recommend a faster connection of 50 Mbps or more. Not many providers in Kiev can deliver such a speed in full; Kyivstar has been on this market for years and inspires confidence, and connection stability matters even more, where I want to believe they are on top. A high connection speed can be necessary when working with large amounts of data (downloading them from and uploading them to the network). Perhaps you like watching movies in excellent quality, or you download large games every day, or you upload large videos or work files to the Internet. To check your connection speed, you can use various online services.

Incidentally, speeds of 3 Mbps and below usually make surfing the net somewhat unpleasant: not all online video sites work well, and downloading files is no fun at all.

Be that as it may, there is plenty to choose from on the Internet services market today. In addition to the big providers, the Internet is often offered by small local firms, and their level of service is often excellent as well; I am served by one such small company myself. The cost of services at such firms is, of course, much lower than at large companies, but as a rule their coverage is very small, usually within a district or two.

Publication date: 29.08.2012

One of the best-known and most talked-about parameters when buying a video card is the memory bus width. The question "how many bits does the video card have?" haunts buyers and significantly affects the price of the accelerator, which sellers are not above exploiting. Let's give an unambiguous answer to the question of how important the video card's memory bus width is, and lay out a rough scale of values.

To begin with, let's list all the options in ascending order. As exotica, there are models of so-called graphics cards with a 32-bit bus :). Nvidia also likes to use multiples of three to create cut-down models, although in most cases the bus width is a power of two.

So, the existing video memory bus widths are: 32, 64, 128, 192, 256, 320, 384, 448, and 512 bits.

So how much?! Of course, the more, the better! But…

Extreme values are very rare, as are the odd multiples, except for the 192-bit bus, which has gained popularity. The truth is that what matters is NOT the BUS WIDTH itself but the total memory bandwidth (hereinafter simply the bandwidth): in other words, the speed of memory access in gigabytes per second, GB/s.

As you can see in the picture, the memory bandwidth of the Radeon HD 6790 video card is 134 GB/s. But if you don't have such a utility, or you want to work it out yourself, that is not difficult either.

Bandwidth = bus width * effective memory frequency. The memory frequency should be taken as effective: double the base value for DDR2/DDR3/DDR4 and quadruple for DDR5 (GDDR5).

For our example video card this is 1050 MHz * 4 * 256 = 1,075,200 Mbit/s. Divide by 8 to get bytes (1 byte = 8 bits):

1,075,200 / 8 = 134,400 MB/s = 134.4 GB/s.
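A small sketch of the same calculation (my own illustration; the multiplier follows the rule above):

```python
def memory_bandwidth_gb_s(base_clock_mhz: float, bus_width_bits: int,
                          ddr_multiplier: int) -> float:
    """Memory bandwidth in GB/s: effective clock (MHz) times bus width (bits)
    gives megabits per second; divide by 8 and by 1000 for gigabytes."""
    effective_mbit_s = base_clock_mhz * ddr_multiplier * bus_width_bits
    return effective_mbit_s / 8.0 / 1000.0

# Radeon HD 6790 from the example: 1050 MHz GDDR5 (x4) on a 256-bit bus
print(memory_bandwidth_gb_s(1050, 256, 4))  # 134.4
```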

It is important to understand that if you have a video card with a 64-bit bus or DDR2 memory, its memory bandwidth cannot be high in principle. But 128 bits is not a death sentence! For example, the Radeon HD 5770 with its 128-bit bus has DDR5 (GDDR5) memory with an effective frequency of 4.8 GHz. This gives it 76+ GB/s and, given its sufficiently powerful video core, makes for a very solid video card. Counterexamples can also be given: the Radeon HD 2900 XT has 512 bits! But its memory frequency is not very high, and its video core is hopelessly outdated, so you won't be able to play well on it.

TABLE OF MEMORY BANDWIDTH VALUES for video cards of 2012

Before commenting on this table, remember that the performance of a video card depends primarily on the graphics chip, and only then on the memory bandwidth. Still, there is some correlation. Moreover, hardly anyone thinks of pairing a weak video chip with high-bandwidth memory, or vice versa. Although such cards do exist.

Video cards with a memory bandwidth of less than 16 GB/s are, generally speaking, not video cards at all. They are plugs, good only for occupying the slot and connecting a monitor. You can play only the least demanding games on them.

Memory bandwidth above 20 GB/s is found in video cards with a 128-bit bus and a slow memory type, for example Nvidia's GT 430. You can play on these, but no more than that.

Bandwidth above 37 GB/s is found in cards with a bus of at least 128 bits and an effective frequency above 2.3 GHz, that is, DDR4/DDR5 memory types.

Video cards with a memory bandwidth over 75 GB/s can be classed as proper gaming cards. This level of memory bandwidth is achieved either with modern high-frequency DDR5 memory or with a bus of 256 bits or wider. Assuming a modern video chip is used, most games will run fine at above-average settings at any resolution. For such a card, new, they will ask about $160, although cheaper options can be found.

The 150 GB/s bar is reached only with a bus of at least 256 bits and a modern type of video memory SIMULTANEOUSLY. Typical memory bandwidth for top-end accelerators is around 200 GB/s.

A memory bandwidth over 300 GB/s can only be called monstrous: a 320 GB hard drive could be copied in about a second at that speed. Even the fastest memory at frequencies of 6 GHz and higher on 256- or 384-bit buses is not enough here. This requires simultaneous access by several video cores, each over its own wide bus (at least 256 bits). This is implemented in top-end dual-chip video cards, like the HD 7990. They look something like this...



Such video accelerators have not only a monstrous memory bandwidth but also a monstrous price.

In any case, do not forget that choosing a video card starts with the type of graphics processor: the only task of the memory bandwidth is to let the video core reach its potential. The bandwidth serves the core, not the other way around.