Talk:Framebuffer


What exactly is a frame buffer?

Talk status
This discussion is still ongoing as of 2022-01-24.

Isn't there more than one meaning for the word "frame buffer" or "framebuffer"?

Every graphics card has a framebuffer, which is how it used to refresh the analog screen: scanning through video memory again and again, drawing the lines on the screen at the given refresh rate.

And then there is the framebuffer console (fbcon), which is a framebuffer for the (text) console of Linux itself. It is how Linux "draws" the screen – it is an additional layer between the console and the graphics driver. Yes?

And then we have framebuffer graphics drivers, which are very simple methods of drawing (unaccelerated) graphics onto the screen. Such drivers mostly use a standardized (yet simple) interface like VESA or the EFI framebuffer's GOP or UGA modes. Unlike fbcon this can be used for X11 as well (not a text console, not fbcon), but it is suitable only as a fallback due to its very slow speed.
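(These are the drivers behind kernel options like CONFIG_FB_VESA, CONFIG_FB_EFI or CONFIG_FB_SIMPLE. As an illustration, one can grep the kernel config to see which of them a given kernel was built with; the output below is just an example and will differ per machine:)

# grep -E 'CONFIG_FB_(VESA|EFI|SIMPLE)' /usr/src/linux/.config
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_SIMPLE is not set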

And then we have a "real" graphics driver, accelerated and all, with OpenGL/Vulkan support, and it may internally use a framebuffer as well, to organize the picture that it wants drawn on the screen. Once this framebuffer gets transferred (or "selected", e.g. by a pointer to the memory region) to the VRAM, it is then displayed, but double- and triple-buffering mean that not all of the framebuffer is displayed at all times (simultaneously).

Aren't those cases four different kinds of framebuffers? Or at least framebuffers at entirely different levels?

Which of the above is this article for?

Luttztfz (talk) 12:50, 24 January 2022 (UTC)

I prefer the term pixel buffer or screen buffer to describe the region of memory that holds the image being used to refresh the display surface. There are actually several of these and, ideally, they are switched at the vertical retrace interval. The idea of having several (at least two) is that one can be used to refresh the displayed image while the second has the next image drawn into it. This avoids video tearing artefacts caused by updating the image while you watch. Curious users can disable multiple pixel buffers to see the tearing. It's ugly but otherwise harmless.
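(With a Mesa-based driver that experiment can be done per process. Strictly, the knob below disables the wait for the vertical retrace rather than the extra buffers, but the visible effect is the same tearing; vblank_mode is Mesa's environment variable and glxgears ships in x11-apps/mesa-progs:)

$ vblank_mode=0 glxgears   # swap as fast as possible, ignoring the retrace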
The historical text console is long gone. It was a 1kB region of RAM that held the ASCII codes for the glyphs to be drawn on screen. On readout, the ASCII code and the line number being drawn within the symbol were applied to a look-up table (LUT) to get the actual pattern of dots to put on the screen for that particular line of that particular character. Different LUTs produced different fonts. I'll gloss over the way that effects like colours were achieved. It's more of the same. There is no pixel buffer with this system. The pixels to be displayed are generated 'on the fly'. Graphics, like the Tux logo, are not possible. The current VGA driver has been a framebuffer for a long time now. The entire screen image exists in some RAM somewhere. It's just an implementation detail.
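(The software descendant of swapping that LUT is still with us: the console font can be replaced on a running console with the tools from sys-apps/kbd. A small sketch, assuming the lat9w-16 font file is installed:)

# showconsolefont    # dump the glyphs of the currently loaded console font
# setfont lat9w-16   # load a different font, in effect a different LUT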
The Linux graphics driver stack is built up from a number of layers: some in the kernel, as illustrated on this page, and some in userspace, like x11-drivers/xf86-video-*.
It's not always true to say that there is no acceleration with the kernel framebuffer drivers. Once upon a time, the graphics card was a chunk of RAM and a digital-to-analogue converter. The CPU had to plot every pixel in graphics mode. In text mode, it used the LUT described above. When GPUs were added, they could draw graphics primitives, like lines, rectangles, arcs and so on, and even do colour fills. VESA-compliant cards do all that with a VESA-compliant API. That's partly why there are so many old drivers that need to be turned off. Different vendors made their own APIs, so they needed their own drivers.
There is no concept of 3D at this level. That gets added in userspace. Even then, what gets into the pixel buffer is a 2D 'slice' of the rendered 3D scene.
The userspace parts of the graphic stack call on the kernel parts, so the layers of the cake have to fit together.
This page is about getting the kernel into good shape to provide a working console and, at the same time, choosing correct settings for the userspace parts of the stack that will be added later. It also covers making sure that the things that would stop the kernel part of the graphics stack from working are switched off in the kernel.
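(A sketch of that sanity check, assuming the kernel source lives at /usr/src/linux; the config values and device nodes shown are the desired outcome on one example machine, not guaranteed output:)

# grep -E 'CONFIG_DRM=|CONFIG_DRM_FBDEV_EMULATION|CONFIG_FB_VESA' /usr/src/linux/.config
CONFIG_DRM=y
CONFIG_DRM_FBDEV_EMULATION=y
# CONFIG_FB_VESA is not set
# ls /dev/dri
by-path  card0  renderD128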
A framebuffer driver is a piece of software that renders a complete image into a pixel buffer that will be used (or is being used) to update the display surface image. The term says nothing about offloading any of the work from the CPU. NeddySeagoon (talk) 19:24, 24 January 2022 (UTC)
Seems suitable at this point to put this article in the Meta category, since it is dealing with framebuffers in general. Thanks for the enhancements, Roy Bamford (NeddySeagoon). --Maffblaster (talk) 23:51, 24 January 2022 (UTC)
The intent was to have somewhere to point users new to DIY kernels who come to the forums with consoles showing black text on a black background. That is, it's enough to configure the kernel correctly so that doesn't happen and users get the /dev/dri nodes that they will need later. NeddySeagoon (talk) 13:30, 25 January 2022 (UTC)
Thanks for elaborating.
What you call a LUT, isn't this also called a character generator? This saves video memory because, as you said, initially only ASCII (or some other encoding) was stored, which is far less than using graphics, which requires drawing every pixel in video memory. And it was much faster in the old days; it was called "text mode". Some systems, namely the Macintosh or the Atari, didn't have a text mode; they only had a graphics mode. This is why, e.g., in Open Firmware on a Power Mac the text output is very, very slow: it is building a text mode in software on top of the graphics mode in hardware.
But I've read that all of this – text mode, graphics mode, Video RAM – was also called a "frame buffer" from a technical perspective...
Linux on a hardware platform like a PC can use text mode and doesn't need an extra (internal) framebuffer (i.e. it would use the VGA console).
Linux on hardware like a Macintosh needed some interface to build a text console, because it is non-existent in hardware. Isn't this how fbcon came to be?
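(In kernel terms, I take it this is the difference between the two console options; a sketch of the relevant symbols:)

CONFIG_VGA_CONSOLE=y          # text console in hardware (PC-style VGA text mode)
CONFIG_FRAMEBUFFER_CONSOLE=y  # fbcon, a text console drawn in software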
For the article: I think there should be a clearer distinction and explanation of which type of framebuffer is which. Doesn't fbcon exist on top of, say, vesafb or nvidiafb or efifb or rivafb or radeonfb and so on...
And doesn't x11-drivers/xf86-video-fbdev also use the underlying framebuffer for its output, e.g. the ones I just listed as examples, like vesafb?
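(If so, I assume a minimal xorg.conf section for it would simply point at the framebuffer device node, something like this untested sketch:)

Section "Device"
    Identifier "Generic framebuffer"
    Driver     "fbdev"
    Option     "fbdev" "/dev/fb0"
EndSection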
As for the console: according to /usr/src/linux/Documentation/driver-api/console.rst, there are two types of consoles: (S) the system console (S for system driver: only one is possible, and it cannot be unloaded, just deactivated), and (M) an unlimited number of modular consoles (M for modular driver).
# grep . /sys/class/vtconsole/*/name
/sys/class/vtconsole/vtcon0/name:(S) dummy device
/sys/class/vtconsole/vtcon1/name:(M) frame buffer device
On my system, the system console is called "dummy device". On a classic (ancient) PC with BIOS it would probably be called "VGA+" or so (according to the documentation).
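(console.rst also says that a modular console can be detached and re-attached at runtime through the same sysfs interface:)

# echo 0 > /sys/class/vtconsole/vtcon1/bind   # unbind the frame buffer console
# echo 1 > /sys/class/vtconsole/vtcon1/bind   # bind it again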
In /usr/src/linux/Documentation/fb I read through framebuffer.rst and fbcon.rst, but I don't understand all of it. Specifically, I don't understand how Linux framebuffer drivers really work, how consoles are part of this (text consoles can also be used over serial connections, i.e. serial consoles), and how X11 uses it, especially how graphics drivers and DRM, DRI and KMS interact with it (if they do).
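(For the serial case, I believe it is just a matter of kernel parameters, e.g. something like the following to get console output on both the first serial port and the normal VT; the device name and speed are examples:)

console=ttyS0,115200n8 console=tty0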
In theory it would be possible to use a Linux system that loads a dummy (non-displaying) console (no text output at all) and use a graphics driver directly to load X11. There wouldn't be a Linux framebuffer involved, if the graphics driver doesn't use the Linux fb implementation.
And then, it would also be possible to use a text console like the VGA console, also without the fb implementation (which was the standard case in very early Linux, like, I assume, the Linux kernel 0.x).
Again, I don't understand what a "Linux framebuffer driver" really is (not the general term "frame buffer", but the meaning specific to Linux) and what I need it for. And why should I deselect/disable the old framebuffer drivers?
On an old Power Mac from the late 1990s, aty128fb was the only way to get a text console, establishing a Linux framebuffer... But what does this mean? Luttztfz (talk) 09:43, 25 January 2022 (UTC)
I found this: https://tldp.org/HOWTO/html_single/Framebuffer-HOWTO/
According to it, the Linux framebuffer is an abstraction that allows a stable generic interface across all platforms. That makes sense.
Luttztfz (talk) 10:07, 25 January 2022 (UTC)
The important points there are paragraph 2 and the opening statements of paragraph 3. The framebuffer (software) is an abstraction of the pixel buffer (hardware).
The bottom layer is the pixel buffer in hardware. Drawing into the pixel buffer is the job of the frame buffer driver. No layers further up need to know how to get the best out of the hardware. The frame buffer drivers present a uniform interface to the next layer up, whatever that is. Maybe nothing any more.
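One way to see that uniform interface: every fbdev driver exposes the same /dev/fb* node and the same sysfs attributes, whatever the hardware underneath. Something like the following should behave the same on vesafb, efifb or radeonfb (the dd paints noise on the screen through the generic interface and stops with a harmless error when it runs off the end of the buffer):

# cat /sys/class/graphics/fb0/name        # which driver backs fb0
# cat /sys/class/graphics/fb0/bits_per_pixel
# dd if=/dev/urandom of=/dev/fb0 bs=64k   # fill the visible screen with noise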
It's got more confusing of late as the DRM layer mostly includes its own frame buffer drivers that know all about the hardware too. Hence dmesg shows the kernel starting off with the driver that starts first, then switching to more capable drivers, possibly several times. The kernel framebuffer drivers (on the framebuffer menu) may only be used for a few seconds at startup.
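(That handoff can be watched in the kernel log; which driver names appear depends entirely on the hardware:)

# dmesg | grep -iE 'fb[0-9]|framebuffer'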
There is an exception to "The kernel framebuffer drivers (on the framebuffer menu) may only be used for a few seconds at startup." too. nvidia-drivers does not provide a console driver, so Ctrl-Alt-Fn switching switches back to whatever was driving the console before the GUI started.
While the technical nitty-gritty that we are both familiar with is interesting, the target audience of this page need not be aware of it. They just want to avoid a blank console and get support for /dev/dri. Many of them don't want to lose the console messages that appear before a modular driver is loaded, either. NeddySeagoon (talk) 13:30, 25 January 2022 (UTC)