Gentoo Myths Busted

From Gentoo Wiki
Important
This is not a help page. It's a "don't do these things unless you have a good reason to" page. Once upon a time they may have been very good things, but no more.
Tip
Style for this page only: Heading level 2 is the myth. The following paragraphs explain why it is now a myth.
Warning
Help, if any, should be included by reference, or this page will itself be added to the myths as the help goes out of date and passes into Gentoo mythology.

Introduction

Like the IBM PC (launched 1981) and its offspring, Gentoo (October 1999) has grown old and gnarly. See the Gentoo History Project.

Try your hand at installing Gentoo as it was in 2003

Like lore everywhere, this ageing process has given rise to beliefs and recommendations that have their origins in old part-truths but are no longer true today on modern hardware. Some were never true.

In no particular order yet ...

Putting PORTAGE_TMPDIR into tmpfs Speeds Up Builds

That has not been true for many years now, if it ever was.

tmpfs is the kernel disk cache, but with no permanent backing store. The kernel serves everything from the cache without reading the drive when it can. It follows that if you have enough RAM to hold PORTAGE_TMPDIR in tmpfs, the kernel is already doing the caching for you.

The main point in favour of PORTAGE_TMPDIR in tmpfs is that it saves writes that will never be read. That may be considered a good thing by users who are worried about write wear on an SSD.

The detailed discussion can be found in TMPDIR in tmpfs.
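For users who do want the reduced SSD writes, a minimal sketch of what such a setup might look like. The size, ownership, and paths below are illustrative assumptions, not recommendations; some large packages need far more space than this:

```shell
# /etc/fstab -- mount a tmpfs over Portage's build area
# (8G is an assumed size, adjust to your RAM and the packages you build)
# tmpfs   /var/tmp/portage   tmpfs   size=8G,uid=portage,gid=portage,mode=0775   0 0

# /etc/portage/make.conf -- /var/tmp is already the default PORTAGE_TMPDIR;
# shown here only so the relationship to the mount point above is clear
# PORTAGE_TMPDIR="/var/tmp"
```

Note that Portage builds under ${PORTAGE_TMPDIR}/portage, which is why the mount point above ends in /portage while the variable does not.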

Not Having a Swap Partition/File Prevents the Kernel Swapping

False. The swap partition/file is only used for the contents of dynamically allocated RAM. That includes the contents of tmpfs.

The kernel has several other things it can do to 'swap'. Not having even a small swap space, which may never be used, removes an indicator of memory pressure. Use of a small amount of swap, particularly after a long uptime, is normal.
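On any Linux system you can see how much swap exists and how much is in use straight from /proc, for example:

```shell
# Swap configured vs. free, plus dirty pages waiting to be written back
grep -E '^(SwapTotal|SwapFree|Dirty):' /proc/meminfo
```

A SwapTotal well above SwapFree after a long uptime is not, by itself, a sign of trouble.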

The kernel does not load anything into real physical RAM until it is required; it is mmapped. When something is needed but not present, a page fault is generated and the CPU does something else while the page is loaded.

This loading on demand means that pages can be dropped and reloaded as needed. That's swapping without using swap space.

Clean pages, which do not need to be written to disk, can just be dropped and reloaded later.

Dirty pages, not yet committed to the drive, must be written before they can be dropped. Note that these file-backed dirty pages are written back to their own files, never to swap.

An interesting side effect of this mmapping is that the kernel can execute programs that are too big to fit into RAM, as only the pages currently in use need to be resident; the rest can be dropped and reloaded on demand.
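This demand paging is visible in the gap between a process's virtual size and its resident set; a quick sketch:

```shell
# VmSize is everything mmapped into the address space; VmRSS is only what
# actually occupies physical RAM right now. VmSize is normally much larger.
grep -E '^Vm(Size|RSS):' /proc/self/status
```

Here /proc/self/status refers to the grep process itself, so the numbers show a freshly started program that has touched only a fraction of what it has mapped.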

Using -O3 in make.conf Speeds Up Execution Times

The clue here is in the name: -O for Optimise. That means trade-off. While it will be optimal for some things, it will be suboptimal for others.

Premature optimisation is the root of all evil.

All computers wait at the same speed. Do you really care if $EDITOR waits longer between your keypresses?

You may want to squeeze the pips out of QEMU, but this may well not be the way to do it.

-O3 makes the code bigger to try to reduce execution time. That's fine as long as the bigger code does not displace wanted code in the CPU cache, or, if it does, the prefetcher takes care of the potential cache misses before they happen. It's all very cache-size dependent and needs to be benchmarked on a case-by-case, probably function-by-function, basis.
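The usual compromise is a conservative global setting, with aggressive flags confined to the few packages where a benchmark has shown a win. A sketch of how that looks in Portage; the package and file names are illustrative assumptions:

```shell
# /etc/portage/make.conf -- the widely recommended baseline
# COMMON_FLAGS="-O2 -pipe -march=native"

# /etc/portage/env/O3.conf -- an opt-in environment for benchmarked winners
# CFLAGS="${CFLAGS} -O3"
# CXXFLAGS="${CXXFLAGS} -O3"

# /etc/portage/package.env -- apply it per package
# (app-emulation/qemu is only an example, not a claim that it benefits)
# app-emulation/qemu O3.conf
```

This keeps the risk of -O3 confined to packages you have actually measured, instead of applying it world-wide on faith.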

Intel/AMD Only

These systems have an interesting 'bug' called the excess precision bug, inherited from the 8087. The FPU internally works to 80 bits, but doubles in RAM and other floating point data are only 64 bits. Thus different -O levels can give different floating point results for the same source code. It all depends on how intermediate results are passed: in FPU registers or in RAM.

MAKEOPTS threads+1

Using MAKEOPTS="-j&lt;threads+1&gt;" (or similar) was a common practice to ensure that there is always one more job than the number of available CPU threads. This approach used to help keep the CPU busy back in the single-core days, even if some jobs were waiting for I/O operations or other resources. Nowadays, in our multi-core world, it does more harm than good, so the 2 GiB of RAM per thread rule of thumb should be applied instead.
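The rule of thumb above can be sketched as: take the smaller of the CPU thread count and total RAM divided by 2 GiB. The 2 GiB divisor is a convention, not a hard limit:

```shell
#!/bin/sh
# Suggest a -j value as min(CPU threads, RAM in GiB / 2), never below 1.
threads=$(nproc)
ram_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
jobs=$(( ram_kib / (2 * 1024 * 1024) ))      # 2 GiB expressed in KiB
[ "$jobs" -gt "$threads" ] && jobs=$threads   # RAM is plentiful: CPU-bound
[ "$jobs" -lt 1 ] && jobs=1                   # always allow at least one job
echo "MAKEOPTS=\"-j$jobs\""
```

The printed line is what you would put in /etc/portage/make.conf; heavyweight C++ packages may still want an even lower job count than this suggests.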

[https://blogs.gentoo.org/ago/2013/01/14/makeopts-jcore-1-is-not-the-best-optimization/ Demonstration].