
Interlace games

2022.01.17 02:00

Some people don't like the blurry feel of this pseudo-interlaced mode, so they made an RGB mod that doubles the scanlines into a progressive mode. Here is the interesting thing.


Switching it to progressive mode, the scanlines were doubled and the pixels were clearer, but from my square non-flicker test it still seemed to blend two frames into one, so possibly still 30 Hz. But recently I acquired a RetroTINK 2X upscaler, and with it the same test seems to run at 60 fps, with the squares flickering and the movement looking faster. Later, I tried to play Super Street Fighter 2 on the machine, and indeed it feels quite a bit smoother.
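
For reference, here is a minimal sketch of the kind of flicker test described above, written in Python with pygame (my own reconstruction, not the exact test used here). A small square is drawn only on every other frame: on a true 60 fps output it flickers visibly, while on a 30 Hz / frame-blended output it settles into a steady, dimmer square.

```python
# Flicker test sketch: draw a square on alternating frames.
# Assumes Python 3 with pygame installed (pip install pygame).
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()

frame = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    if frame % 2 == 0:
        # Visible only on even frames: flickers rapidly if the chain
        # really delivers 60 distinct frames per second.
        pygame.draw.rect(screen, (255, 255, 255), pygame.Rect(140, 100, 40, 40))
    pygame.display.flip()

    clock.tick(60)  # aim for 60 frames per second
    frame += 1

pygame.quit()
```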


Previously, I thought I would recognize the difference between 30 and 60 fps (and PAL is even worse), but I couldn't pick it out; maybe the mix of the scanlines and the upscaling with corner interpolation smooths the animation? But surprisingly, the RetroTINK 2X brings back the proper 60 fps feel. The progressive mode is cool, because the interlaced mode wasn't a real high-res mode but hardware interpolation with limited control from software: high bits in the pixels and other parameters are supposed to control the 2x2 corner weights, but in a few tests I couldn't get them to change anything, so I need to do more research.


VGA is not really intended for this kind of progressive signal either, but these adapters are designed to be used with monitors that expect progressive input. Just found your article after recording a video of Ehrgeiz: God Bless the Ring for the PlayStation and racking my brain trying to figure out why the heck the video was interlaced.


I didn't even know an interlaced mode existed on the PSX hardware. Great read, thanks for putting the article together!

In the quest to gain greater graphical detail without severely impacting performance, game programmers began to use interlaced video modes in the fourth and fifth generations of video game consoles.


Then in the sixth generation, interlacing was the norm and progressive scan was the option. By the seventh generation, HD gaming was the norm and interlaced graphics were more or less here to stay. Let's explore the issues surrounding interlaced video game graphics here. Standard-definition interlacing is, from the TV's perspective, almost identical to progressive scan. The key difference is that in interlaced mode, a half-scanline delay tells the TV whether to draw an even field or an odd field.
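
As a rough illustration of that half-scanline difference, here is a small Python calculation using the nominal 525-line NTSC numbers (the same logic applies to 625-line PAL); the exact figures for any particular console are an assumption on my part, but the principle is the point: an interlaced field is 262.5 lines long, so alternate fields start half a line apart, while a non-interlaced signal sends a whole number of lines per field so every field lands in the same place.

```python
# Field timing, using nominal NTSC numbers (525 lines, ~15734 lines/s).
LINE_RATE_HZ = 15734.26          # horizontal scan rate
LINE_PERIOD_US = 1e6 / LINE_RATE_HZ

def field_info(lines_per_field):
    """Duration and repetition rate of a field made of `lines_per_field` scanlines."""
    duration_ms = lines_per_field * LINE_PERIOD_US / 1000.0
    rate_hz = 1000.0 / duration_ms
    return duration_ms, rate_hz

# Interlaced: 262.5 lines per field -> the half line shifts every other
# field down by one scanline, interleaving the two fields.
print("interlaced  262.5 lines: %.3f ms, %.2f Hz" % field_info(262.5))

# Non-interlaced trick: a whole number of lines per field, so each field
# overwrites the previous one in exactly the same position.
print("progressive 262   lines: %.3f ms, %.2f Hz" % field_info(262))
```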


A cinema frame is displayed on the screen once and in all regions almost at the same time, but on a TV a dot paints the picture line by line on the screen.


Doing this without any additional measure would result in a quite flickery picture, despite a frame rate equal to or better than cinema. First step: make the screen disperse the beam energy over time (aka afterglow), so lit parts stay lit longer. This is, BTW, similar to what makes pure 24 fps cinema flickery, as known from old silent films. Drawing a received picture twice would have worked just as well on TV, except a CRT has no storage, so the picture had to be sent more often than 25/30 times per second.


So far a nice solution. A picture with a few hundred horizontal lines is quite fine for TV and would satisfy all needs. Producing TV content at 50/60 frames per second would have been no issue. But then there were movies. They were produced at 24 Hz, so transmitting every picture twice would have worked about as well as a genuine TV production, but wasted about half the bandwidth. Thanks to the wonders of human sight, movies now used the 50/60 fields per second to display their 24 fps in higher resolution, while genuine TV content got real 50/60 fps in a kind of enhanced resolution.
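
To make that film-to-fields mapping concrete, here is a small Python sketch of the classic 3:2 pulldown used in 60 Hz regions (in 50 Hz regions film was usually just sped up to 25 fps and shown as two fields per frame). The scheme below is the textbook one, not anything specific to the discussion above.

```python
# 3:2 pulldown: map 24 film frames per second onto 60 interlaced fields.
# Each film frame is held for 3 and 2 fields alternately, so 4 film frames
# fill exactly 10 fields.
def pulldown_32(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        hold = 3 if i % 2 == 0 else 2   # alternate 3 fields, 2 fields
        for _ in range(hold):
            parity = "odd" if (len(fields) % 2 == 0) else "even"
            fields.append((frame, parity))
    return fields

# One second of film (24 frames) becomes 60 fields.
fields = pulldown_32([f"film{n:02d}" for n in range(24)])
print(len(fields))   # -> 60
print(fields[:5])    # first few (frame, field-parity) pairs
```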


Because that's what the TV set expects as input and is able to display. A classic CRT TV set does not combine two 'half' pictures into a single one to be displayed at double fps; it displays each 'half' on its own, the second offset by a line. That's simply the way the CRT hardware works. Because games need to be compatible with the TV hardware and standards out in the field.


It's not a great idea to produce a console that can only work with the very latest displays - at least not if one intends to sell more than a few. Not really, rather the other way around, as a console game is able to adjust. Keep in mind, 'i' or 'p' tells only part of the story, as the number in front of it only names a frame size and structure, not the frame rate. Interlaced will usually mean 50/60 fields per second, while the same picture size as progressive can be 25/30 or 50/60 fps.


Oh, and of course games could go ahead and do fast content at high resolution - with the same drawback as TV content had with motion artefacts. To be exact: "interlacing" is not just a method for saving bandwidth, but mainly for increasing vertical resolution. EDIT: Bandwidth saving and increasing resolution are just two sides of the same coin; see the comment by Justme.
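
A quick back-of-the-envelope Python calculation shows why the two views are the same thing; the NTSC-style numbers below are purely illustrative: at a fixed line rate (fixed bandwidth) you can either refresh roughly 240 lines 60 times a second, or interleave roughly 480 lines as two fields and refresh the full image 30 times a second.

```python
# Same scanline budget, spent two different ways (illustrative NTSC-ish numbers).
LINES_PER_SECOND = 240 * 60        # fixed by the channel's line rate

progressive = {"lines": 240, "full_images_per_s": LINES_PER_SECOND / 240}   # 60
interlaced  = {"lines": 480, "full_images_per_s": LINES_PER_SECOND / 480}   # 30

for name, mode in [("progressive", progressive), ("interlaced", interlaced)]:
    print(f"{name}: {mode['lines']} visible lines, "
          f"{mode['full_images_per_s']:.0f} full images per second")

# Either way the product (lines x full images) is the same: interlacing trades
# image rate for vertical resolution, i.e. it "saves bandwidth" for a given
# resolution, or "adds resolution" for a given bandwidth.
```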


The first picture frame contains the odd lines (1, 3, 5, ...), the second one the even lines. Two frames give one full image, so European TV has 50 frames per second, or 25 full images per second.
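
In code terms, combining the two alternating 'frames' (fields) back into one full image is just interleaving their lines. A minimal Python sketch, with made-up line labels purely for illustration:

```python
# Weave two fields (odd-line field and even-line field) into one full frame.
def weave(odd_field, even_field):
    """odd_field holds lines 1,3,5,...; even_field holds lines 2,4,6,..."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

odd_field  = ["line1", "line3", "line5"]
even_field = ["line2", "line4", "line6"]
print(weave(odd_field, even_field))
# ['line1', 'line2', 'line3', 'line4', 'line5', 'line6']
# 50 fields/s -> 25 of these full images per second on a European TV.
```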


Video games, as well as home computers, did not use interlacing; they generated only "odd" picture frames, respecting neither interlacing nor the half-line shift. Old TVs were more robust and tolerant of the video signal, so they could handle such "non-standard" signals with no big problems. So video games got 50 pictures per second, but only the lines of a single field to display.


In fact, fewer, because some of them are outside the viewable area. The main reason to use interlacing in a video game would be increasing the vertical resolution to roughly double that of a single field. On the other side, interlacing has some drawbacks, such as image flickering. Those lines were not all fully visible either; only some of them appeared on screen, hence the "i" figure. In fact, there was no "p" mode in the '80s, so technically it WAS "i", but used as "p". CRT TVs were designed to handle interlaced signals, where the TV alternates between receiving odd scanlines and even scanlines on alternating frames.


The so-called progressive mode was invented when some hardware designer noticed you could start the even and odd frames on the same scanline, so the CRT's electron gun overwrites the previous frame's lines instead of drawing between them.


This doubles the frame rate and halves the vertical resolution, both of which were useful for game machines since there usually wasn't storage space for high-res graphics. This is also why old game consoles exhibit the scanline effect, where there are little gaps between rows of pixels.
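
Here is a tiny Python illustration of why those gaps appear; the 10-row raster is just a toy example: in the non-interlaced trick every field lights the same rows, while a proper interlaced signal alternates and fills the in-between rows on the next field.

```python
# Which display rows get refreshed by four consecutive fields (toy 10-row raster).
ROWS = 10

def rows_hit(field_index, interlaced):
    # Interlaced: odd fields hit rows 0,2,4,..., even fields hit rows 1,3,5,...
    # Non-interlaced trick: every field hits the same rows 0,2,4,...
    offset = (field_index % 2) if interlaced else 0
    return [r for r in range(offset, ROWS, 2)]

for mode in (False, True):
    label = "interlaced" if mode else "non-interlaced"
    print(label)
    for f in range(4):
        print("  field", f, "->", rows_hit(f, mode))
# Non-interlaced: rows 1,3,5,... are never drawn -> visible scanline gaps.
# Interlaced: the gaps are filled on the next field, at the cost of twitter.
```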


Interlaced signals don't have this problem, because those gaps get filled on the subsequent field, at the expense of making the image more jittery (an effect known as interline twitter), which is usually smoothed out by applying a blur; video games, however, didn't generate such a blur.


So that's another reason games used progressive scan. The image was sent to the display (the TV) using an RF modulator. This essentially acts as a low-power TV station. Since the TV expects broadcast channels to be interlaced, the signal sent from the RF modulator must be interlaced as well. You alluded to this in your question: reduced bandwidth.


By only displaying every other line, the number of pixels that need to be drawn per frame is cut in half. As stated many times by Justme, CRT TVs are fairly tolerant of signals which deviate from the exact broadcast timing standards. There are limits to this tolerance, of course, but the slight deviations used by TV-connectable home computers and video game consoles fall well within those limits. All in all, analog signal inputs are now disappearing from new TVs. The rest of the scanlines are spent in vertical blanking, waiting for the electron beam to move back to the top of the screen.
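
As a ballpark Python illustration of where those remaining scanlines go (the exact visible count varies by TV and by console, so these are only the nominal figures): out of the 262/312 lines a console sends per field, only roughly 240/288 carry picture, and the remainder is the vertical blanking interval.

```python
# Nominal lines per field vs. lines that actually carry picture (ballpark figures).
systems = {
    # name: (lines sent per field, active picture lines, field rate in Hz)
    "NTSC-style non-interlaced": (262, 240, 60),
    "PAL-style non-interlaced":  (312, 288, 50),
}

for name, (total, active, rate) in systems.items():
    blanking = total - active
    print(f"{name}: {active} visible lines, "
          f"{blanking} lines of vertical blanking per field at {rate} Hz")
# The blanking lines are never shown; broadcasters reused some of them for
# data such as closed captions or Teletext, as mentioned below.
```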


Scanlines belonging to the vertical blanking period are not shown on screen and may carry data, such as closed captions; in the European systems, the vertical blanking period typically carried Teletext content. (NTSC lowered its field rate slightly, from 60 to about 59.94 Hz, when color was added.) The European 50 Hz systems implemented color in a different way and did not require such an adjustment to the field rate. While you could generate video imagery in this specific way, this is not how video cameras do it. Video cameras (TV cameras) have always recorded motion in each pass, giving interlaced video the motion resolution of 60 Hz. Broadcast SD TV video is just a succession of alternating odd and even fields.


In fact, the old, video tube-based TV cameras preceding CCDs recorded motion not only at field intervals but as continuous scans across the scanline raster, in the exact same scanning pattern as how the electron beam in the CRT-based TV receivers drew the images on the CRT screen.


If a vertically-oriented rod passed the camera field of view in the horizontal direction during a single video field, the old-timey TV camera would have captured a distorted diagonal image of it. On the other hand, the electron guns of the CRT TVs displaying these images from a live studio show were, at each moment, essentially synchronized to the scanning image capture of the tube-based TV cameras in the studio — which is kind of neat, if you think about it.
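
Just to visualise that, here is a toy Python simulation (the raster size and rod speed are made-up numbers): each scanline is captured a little later than the previous one, so a vertical rod moving horizontally shows up at a slightly different column on every line, i.e. as a diagonal streak.

```python
# Toy simulation of a continuously scanning camera capturing a moving rod.
HEIGHT, WIDTH = 12, 40        # tiny raster, purely illustrative
LINE_TIME = 1.0               # arbitrary time unit per scanline
ROD_SPEED = 1.5               # columns the rod moves per scanline time

field = []
for y in range(HEIGHT):
    t = y * LINE_TIME                     # this line is sampled at time t
    rod_x = int(5 + ROD_SPEED * t) % WIDTH
    row = ["#" if x == rod_x else "." for x in range(WIDTH)]
    field.append("".join(row))

print("\n".join(field))   # the 'vertical' rod comes out as a diagonal streak
```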


Old TV-connectable computers and game consoles avoided interlacing probably for a multitude of reasons: progressive scan was a better match for their technical capabilities (limited video memory), a non-broadcast-standard signal was simpler to generate, and interlacing is not all that desirable in the first place, as it makes all the horizontal lines noticeably jittery and tiresome to look at (a major headache for computer use). The jitteriness of an interlaced signal is a big issue especially when you have a limited palette (a limited number of simple primary colors with no shades to smooth things out), need to display non-natural, static, crisp imagery such as text and charts, and have no advanced video processing or filtering capability.


Slow-phosphor monitors were marketed as being more ergonomic than standard monitors. But as you can guess, slow phosphors only suited relatively static screen content, as they caused moving objects to leave a noticeable motion-blur trail behind them.


Also, scrolling any content on a slow-phosphor monitor was a pain. It was better to browse text files one screenful at a time. As for TV-connectable computers and computer-generated interlaced signals, the Amiga used to be one of the rare exceptions of its era, offering broadcast-standards-compatible interlaced modes and genlock capability (the capability of synchronizing to an external video signal). But even on this system, the interlaced modes were primarily used for video titling applications, not much else.


The Xbox also had a GPU capable of accelerated 3D graphics and shading with millions of colors, so the imagery it generated was closer to that produced by video cameras.


The typical applications (games) also did not rely on the user intently watching static text screens or charts for long periods of time, so the usage was different from that of a TV-connectable general-purpose computer.


Some commentators assume progressive scan to have been inferior to an interlaced signal because the scanlines of a non-interlaced signal allegedly have huge, unsightly gaps between them. However, it would be a mistake to assume that a non-interlaced signal on a CRT TV would look anything like an image on an LCD screen with every second line missing.


The images drawn by an electron beam on the CRT phosphors through a shadow mask are not tidy LCD pixel rows with exact dimensions or borders: the electron beam that draws the scanlines has a certain spread. Also, the brighter the image content is, the more the beam will spread.
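
As a final illustration of that beam spread, here is a small Python sketch (pure math, with made-up numbers for the beam width): each scanline is modelled as a Gaussian intensity profile, and as the spread grows, the brightness dip between two adjacent scanlines largely disappears.

```python
# Model two adjacent CRT scanlines as Gaussian intensity profiles and see
# how much of a "gap" remains between them for different beam spreads.
import math

def dip_between_lines(spacing, sigma):
    """Brightness midway between two scanlines, relative to a line centre."""
    centre = math.exp(0.0) + math.exp(-(spacing ** 2) / (2 * sigma ** 2))
    midway = 2 * math.exp(-((spacing / 2) ** 2) / (2 * sigma ** 2))
    return midway / centre

SPACING = 1.0                      # distance between scanline centres
for sigma in (0.2, 0.35, 0.5):     # beam spread; grows with brightness
    print(f"sigma={sigma}: midpoint brightness = "
          f"{dip_between_lines(SPACING, sigma):.2f} of line centre")
# Narrow beam -> visible dark gaps; wide (bright) beam -> gaps mostly fill in.
```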