Glitch is not just an error or a state of destruction.
A glitch is the playing of an unexpected and unintended state by the reproducing system.
For example, the blue screen of the Windows OS is not a glitch, because it is, practically speaking, a handled error.
Nor is randomly generated white noise a glitch, because it is the result of prepared randomness.
Nor is an image generated by a pixel-sorting algorithm a glitch, because it is an image that is entirely expected.
Here I should note that the term "play" in this text means simply "play". Precisely because a glitch cannot be played expectedly and repeatably, the word cannot be replaced with "playback" or "reproduce".
Oval began composing their music from the sounds of CDs whose undersides had been drawn on with a marker pen.
They called the sound image of the skipping audio "glitch".
For people who grew up alongside the advance of the computer, the most familiar glitch might be the corrupted sprite data of a half-inserted NES cartridge. For children growing up with digital TV broadcasts, the memory of glitch could be the broken MPEG-2 image caused by interference in bad weather.
A digitally damaged photo, infected with block noise and colors that the original image never had, is also called a glitch. But it is not a glitch when the unreadable bytes of the damaged data crash the viewer software or pop up an error dialog.
In the state called glitch, the reproducing system keeps working.
And the reproducing system plays the unintended image.
It is both playable and unintended. The order to reproduce the original image, which the data or the medium requests of the system, is not relayed correctly because of the damage; the reproducer interprets it with an excursion from the intention, and finally a new, unexpected, deflected image arises.
Now we can define glitch as the state in which the reproducer generates a new image that was not contained in the original one.
So the "play" of a glitch cannot be "playback" or "reproduce".
People who want to determine what glitch is tend to say that a "true" glitch cannot be reproduced. In such statements they focus on the mental state of the one who beholds the glitch. They ask whether we could feel the same surprise if the same glitch appeared again. For them, glitch is tightly bound to the mind.
On the other hand, the same people who make this determination also use the term "glitch art".
It becomes "art" because they see it not as an objective phenomenon but as a simulacrum in their minds.
However, art must have some orientation toward reproducibility, toward something that can be fixed in place.
Comparing a wild glitch with a capture of the original one; making it permanently reproducible by reconstructing the environment of the original; replicating it by emulating the surface of the original: for every such reproduction sample of glitch, they must pass sterile judgment on whether it is "true" or not.
In such a situation, whoever would make glitch art has to stand in a double bind.
Anyway, I simply try to exhibit the glitch.
In the movie's first sequence, I feature the compression technology MPEG-4 Visual, which is defined in the MPEG-4 specification.
Glitches differ between comparable or derivative codec implementations, though; here I chose FMP4, the implementation in ffmpeg.
The basic technique of movie compression is to drop the pixels that overlap between frames. First comes a keyframe, which holds every pixel of the full resolution, like a static image. Each following frame refers to the previous one and carries only the information of the changed pixels. Dozens of such difference frames appear, and then a keyframe shows up again. A digital movie is made of this iteration, and the common codecs of almost all digital movies share this structure. (Though codecs developed more recently also take differences from the following frames, not only the previous ones.) The periodic insertion of keyframes makes it possible to restore playback even when the movie encounters a loss of frame data. Because of this restoration mechanism, a naive glitch against a movie disappears in a moment. Therefore a glitch for movies generally has to remove all the keyframes, the footholds of restoration. A movie stripped of its keyframes, and the technique itself, are called datamoshing.
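The keyframe-and-difference structure, and why removing the keyframes defeats restoration, can be sketched as a toy model. This is not an actual codec; the frame layout and the names `play`, `footage`, and so on are my own invention.

```python
# Toy model of keyframe / difference-frame playback.
# A "frame" here is just a flat list of pixel values.

def play(frames, width=4):
    """Apply a stream of frames in order; return the final screen."""
    screen = [0] * width
    for frame in frames:
        if frame["type"] == "key":
            screen = list(frame["pixels"])         # full restore
        else:                                      # difference frame:
            for index, delta in frame["changes"]:  # only changed pixels
                screen[index] = (screen[index] + delta) % 256
    return screen

footage = [
    {"type": "key", "pixels": [10, 20, 30, 40]},
    {"type": "diff", "changes": [(0, 5)]},    # pixel 0 brightens by 5
    {"type": "diff", "changes": [(1, -5)]},
]

print(play(footage))   # → [15, 15, 30, 40], the intended image

# Datamoshing: drop the keyframe. The differences are now applied to
# whatever was on screen before (here, a black screen), and the decoder
# happily plays an image the footage never contained.
moshed = [f for f in footage if f["type"] != "key"]
print(play(moshed))    # → [5, 251, 0, 0], an unintended image
```

The point of the sketch is that the decoder never refuses to run: without the keyframe it still applies every difference, just to the wrong base.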
A datamoshed movie consists only of difference frames. A difference frame carries information about colors and motion: for instance, that a particular pixel moves two pixels to the right and turns slightly blue compared with the previous frame. The reproducing system processes this information mechanically and continues playing as far as it can, even when there are no keyframes or the frames arrive in the wrong order. The reproducer either does not know that the movie is corrupted, or handles the error internally and ignores it.
My method of glitch here is to repeat a single difference frame many times. The frames become separated from the context of the footage, it becomes impossible to recognize what the proper shot on the screen was, and an unintended movie is generated. As the difference frame repeats, a particular pixel keeps moving two pixels to the right and keeps turning blue. A motion and a color that are not contained in the original movie now appear. This effect does not even show the broken shapes of objects seen in straightforward datamoshing. At first glance, we might not recognize that it is caused by a glitch at all.
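The effect of repeating one difference frame can be sketched in the same toy register: apply the same "shift right and add blue" delta over and over, and the pixel drifts to a position and a color the original never held. The function and its parameters are invented for illustration; real MPEG-4 difference frames encode motion vectors and residuals, not this.

```python
# Toy sketch of repeating one difference frame: "move everything two
# pixels right and add a little blue", applied again and again.

def apply_diff(row, shift=2, blue_gain=16):
    """Row of (r, g, b) pixels; shift right and push toward blue."""
    width = len(row)
    out = [(0, 0, 0)] * width
    for x, (r, g, b) in enumerate(row):
        out[(x + shift) % width] = (r, g, (b + blue_gain) % 256)
    return out

row = [(200, 0, 0)] + [(0, 0, 0)] * 7   # one red pixel at the left edge

for _ in range(3):                       # repeat the same diff frame
    row = apply_diff(row)

print(row.index((200, 0, 48)))           # → 6: the red pixel has drifted
                                         # six columns and gained blue,
                                         # a state absent from frame one
```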
There is no longer any ordinary movie expression. Only the quality of the difference frame is shown, nothing more than the appearance of a single element of which the digital movie consists. What I made and screen here is the same as demonstrating the flammability of turpentine by burning it.
In the second sequence of this movie, the lossless compression codec Huffyuv is treated.
As a lossless codec it consists only of keyframes, and each frame is compressed with an algorithm that uses Huffman tables and prediction functions over neighboring pixels.
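The prediction half of this idea can be shown in a minimal sketch: store each pixel as its difference from a prediction, here simply the left neighbor. Real Huffyuv also offers gradient and median predictors and encodes the residuals with per-channel Huffman tables; this toy covers only the prediction step, which is the part the later glitch exploits.

```python
# Minimal sketch of predictive coding as used by lossless codecs such
# as Huffyuv: each pixel is stored as the residual against a predictor
# (here, the left neighbor), and the residuals are what get Huffman-coded.

def encode(row):
    prev = 0
    residuals = []
    for pixel in row:
        residuals.append((pixel - prev) % 256)  # small values on
        prev = pixel                            # smooth gradients
    return residuals

def decode(residuals):
    prev = 0
    row = []
    for r in residuals:
        prev = (prev + r) % 256                 # prediction + residual
        row.append(prev)
    return row

smooth = [100, 101, 103, 103, 104]
res = encode(smooth)
print(res)                  # → [100, 1, 2, 0, 1]: mostly tiny numbers,
                            # which is what makes Huffman coding pay off
assert decode(res) == smooth                    # lossless round trip
```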
By the way, an image or movie must carry pixel data sized according to its resolution; what happens, then, when the reproducer receives less data than the resolution expects? In the case of an image such as JPEG or GIF, the data is basically processed from top-left to bottom-right, like English text, so the upper area of the image appears correctly and the missing bottom part is displayed as a plain gray emptiness. (This depends on the viewer application.)
With the Huffyuv codec, it is slightly stranger. Remove almost all the data from each frame and play it: the thin upper part appears correctly, as with JPEG and GIF, but the empty area is filled with vivid colors and wild contrast. Although the correct upper area influences it, the result carries no meaning for people. Beholders will not even find the emptiness in it. Here, too, colors and patterns that the original data never had are newly generated.
When emptiness is given to the reproducer, the decoding process, following the compression algorithm, produces a fullness quite far from its proper purpose and feature.
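How emptiness becomes fullness can be sketched with the same prediction idea. Suppose the decoder, having run out of real data, keeps reading zero bits, and suppose its Huffman table happens to map a run of zeros to some nonzero residual; the predictor then accumulates that residual into a ramp of ever-changing values. The table mapping here is entirely made up, and real Huffyuv tables differ, but the mechanism is the one at work: decode plus predict never produces "nothing".

```python
# Toy sketch: a predictive decoder fed pure emptiness. The constant
# below stands in for whatever an (invented) Huffman table would decode
# a run of zero bits into; left-neighbor prediction then accumulates it.

RESIDUAL_FOR_ZEROS = 7   # what the made-up table maps "0000..." to

def decode_emptiness(count):
    prev = 0
    out = []
    for _ in range(count):
        prev = (prev + RESIDUAL_FOR_ZEROS) % 256   # prediction + residual
        out.append(prev)
    return out

print(decode_emptiness(8))   # → [7, 14, 21, 28, 35, 42, 49, 56]:
                             # a gradient conjured from no data at all,
                             # wrapping into new colors as it overflows
```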
It is as if the computer were playing a nonsense word-association game, extracting the word "internet" from "turpentine".
The pixels that appear there should not actually exist. It looks as if the unconscious of the reproducing system were surfacing.