The real question is: do you see swap activity? Run vmstat 1 for a little while; as long as the si and so columns stay all zeros, you're good: there's no swap activity.
If you know you never want to swap, then don't enable it. Otherwise, treat swappiness as advice to the Linux kernel, but don't rely on it preventing swap altogether just because you've set it to zero. See for instance [1] on LWN. It's a little dated now, but it shows just how much the behavior of the Linux kernel can change across versions.
Whether or not to disable swap altogether is still regularly the subject of hot debates, but my stance is to not enable it for user-facing production systems. There's nothing worse than a system that's half-working. I'd much rather have the OOM killer wreak havoc and take memory hogs down than have my servers crawl under load and serve with high latency and timeouts. StumbleUpon runs entirely without swap. Google went one step further: they don't even compile support for swap into their production Linux kernels.
Of course there is no one true answer; it depends on the specifics of your environment.
[1] "swapping and the value of /proc/sys/vm/swappiness" http://lwn.net/Articles/100978/
A few guesses:
- Maybe your system previously was under more memory pressure, which caused a bunch of memory to get swapped. When the pressure went away, the now-free memory was used by buffer cache, but the previously swapped data hasn't been accessed since then. Thus, it remains in swap even though it could be brought back into RAM.
- Maybe you have a large amount of data stored in /dev/shm/ or another tmpfs mount. These get accounted under "Cached" memory as well, but can't be swapped out or discarded at will.
- Maybe some processes have mlocked some mapped files. I think this would also be accounted under Cached memory, but couldn't be swapped out. Try running: sudo grep VmLck /proc/*/status | grep -v '0 kB'
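If you want to see VmLck move for yourself, here is a hedged C sketch that maps a file and mlock()s it, then sleeps so you can run the grep above against its PID (mlock_demo.c is a made-up name; it assumes CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):
[code]
/* mlock_demo.c - map a file, lock it in RAM, and pause so VmLck can be
 * inspected in /proc/<pid>/status. Build: cc -o mlock_demo mlock_demo.c */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (mlock(p, st.st_size) < 0) { perror("mlock"); return 1; }

    printf("locked %lld bytes; try: grep VmLck /proc/%d/status\n",
           (long long)st.st_size, (int)getpid());
    pause();                      /* hold the lock until interrupted */
    return 0;
}
[/code]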
Awesome question.
I dove into the inner workings of swap as part of a Linux kernel spelunking effort a few years back, so I can comment.
Mostly it's because vm.swappiness is a HINT to the kernel - it indicates your desire, but it is neither a limit nor a specific multiplier.
http://www.linuxvox.com/2009/10/what-is-the-linux-kernel-parameter-vm-swappiness/
So, as a hint, the kernel can pretty much say "thanks but no thanks" and do what it wants to do, when it knows for a fact that it knows best regarding the matter.
Swapping occurs for very good reasons. You might google for "linux why swap is good" - one interesting read on this is:
http://rudd-o.com/linux-and-free-software/why-swap-on-linux-is-always-good-even-with-tons-of-ram
Your question makes me want to pose a counter-question: If your system is operating acceptably, why are you worried about swapping?
Your free output shows 65094 free in buffers/cache and only 5589 used in swap. This looks quite healthy to me.
If your curiosity is insatiable possibly this will help:
http://northernmost.org/blog/find-out-what-is-using-your-swap/
...if you run this, please share what you learn from it.
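For the curious, reading the hint's current value from C is trivial. A minimal sketch (this is the same number sysctl vm.swappiness shows; writing a new number to the same file as root changes it):
[code]
/* swappiness.c - read the current vm.swappiness hint from procfs. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f) { perror("/proc/sys/vm/swappiness"); return 1; }

    int v;
    if (fscanf(f, "%d", &v) == 1)
        printf("vm.swappiness = %d (a hint, not a hard limit)\n", v);
    fclose(f);
    return 0;
}
[/code]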

When vm.swappiness is set to 0, it indicates that the Linux kernel should avoid swapping out pages unless absolutely necessary. However, even with this setting, there are situations where the kernel might still swap out pages, including cached pages. Here are some reasons why this might happen:
- Memory Pressure: If your system is experiencing memory pressure (i.e., the available RAM is low), the kernel may still decide to swap out pages to free up memory, even with swappiness set to 0. This is especially true if there are active processes that need memory and the system cannot reclaim enough from other sources.
- Cache Management: The Linux kernel uses a complex memory management strategy. Cached pages can be swapped out if the kernel deems it necessary to maintain performance for active processes. Cached data is not as critical as data in active use, so the kernel might choose to free up that memory.
- Reclaiming Memory: The kernel's memory management subsystem actively manages memory, including reclaiming cached pages. If it needs to allocate memory for new or active processes, it may swap out cached pages to make room.
- Page Replacement Policy: The kernel employs various algorithms to manage memory. Even with low swappiness, if the kernel's algorithms determine that certain cached pages can be swapped out without significantly affecting performance, it may proceed to do so.
- Kernel Behavior: The behavior of the kernel can also depend on other parameters and settings. For instance, if the system is configured to prioritize certain types of memory usage, the kernel may swap out cached pages to accommodate those priorities.
Recommendations
- Monitor Memory Usage: Use tools like free, top, or htop to monitor memory usage and understand how much memory is being used for cache versus other processes (a programmatic sketch follows below).
- Adjust Swappiness: If you find that swapping is still occurring frequently and affecting performance, you could experiment with different vm.swappiness values (e.g., setting it to 1 or 10) to see if it improves performance while still allowing some degree of swapping.
- Increase RAM: If your workload consistently runs out of memory, consider adding more RAM to the system. This can reduce the need for swapping and improve overall performance.
- Optimize Applications: Look into optimizing the applications running on your system to reduce memory usage, which may also help alleviate swapping.
By understanding these dynamics, you can better manage your system's memory and performance.
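As a starting point for the monitoring suggestion above, here is a minimal C sketch using sysinfo(2); it reports the same totals that free(1) works from (meminfo.c is just a made-up file name):
[code]
/* meminfo.c - snapshot RAM and swap usage via sysinfo(2).
 * A minimal sketch for spotting swap pressure; build: cc -o meminfo meminfo.c */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) < 0) { perror("sysinfo"); return 1; }

    /* all size fields are in units of si.mem_unit bytes */
    double mb = si.mem_unit / (1024.0 * 1024.0);
    printf("RAM : %.0f MB total, %.0f MB free\n",
           si.totalram * mb, si.freeram * mb);
    printf("swap: %.0f MB total, %.0f MB free\n",
           si.totalswap * mb, si.freeswap * mb);
    return 0;
}
[/code]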
Todd, it was /dev/shm being used by Sybase. I had no idea memory locked into /dev/shm would show up as cached memory when looking at free. That seems kinda silly? It's not at all obvious.
The page cache caches pages of files to optimize file I/O. The buffer cache caches disk blocks to optimize block I/O.
Prior to Linux kernel version 2.4, the two caches were distinct: Files were in the page cache, disk blocks were in the buffer cache. Given that most files are represented by a filesystem on a disk, data was represented twice, once in each of the caches. Many Unix systems follow a similar pattern.
This is simple to implement, but with an obvious inelegance and inefficiency. Starting with Linux kernel version 2.4, the contents of the two caches were unified. The VM subsystem now drives I/O and it does so out of the page cache. If cached data has both a file and a block representation—as most data does—the buffer cache will simply point into the page cache; thus only one instance of the data is cached in memory. The page cache is what you picture when you think of a disk cache: It caches file data from a disk to make subsequent I/O faster.
The buffer cache remains, however, as the kernel still needs to perform block I/O in terms of blocks, not pages. As most blocks represent file data, most of the buffer cache is represented by the page cache. But a small amount of block data isn't file backed—metadata and raw block I/O for example—and thus is solely represented by the buffer cache.
See also my answer to What is the difference between Buffers and Cached columns in /proc/meminfo output?
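One way to poke at the page cache from user space is posix_fadvise(2): POSIX_FADV_DONTNEED asks the kernel to drop a file's cached pages. A hedged sketch (fadvise_drop.c is a made-up name); timing a re-read of the file before and after running it should make the cache effect visible:
[code]
/* fadvise_drop.c - ask the kernel to evict a file's page-cache pages. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* offset 0, len 0 means "the whole file" */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (err) { fprintf(stderr, "posix_fadvise: error %d\n", err); return 1; }

    printf("asked the kernel to drop cached pages of %s\n", argv[1]);
    close(fd);
    return 0;
}
[/code]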
Yes, even when swapping is disabled, the kernel will perform demand paging. Moreover, even with swap disabled, a Linux system will still evict clean file-backed pages, since they can always be re-read from their backing files.
This is intentional and very, very good for performance. In the rare situations where you need predictable latency, your process should call mlockall(MCL_CURRENT | MCL_FUTURE) to pre-fault and lock all of its pages into RAM.
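A minimal sketch of that call, assuming the process has CAP_IPC_LOCK or a generous RLIMIT_MEMLOCK:
[code]
/* lockall.c - pre-fault and pin every current and future page. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* MCL_CURRENT faults in and locks what is mapped now;
     * MCL_FUTURE extends the guarantee to later mappings. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return 1;
    }
    puts("all pages locked in RAM; no major faults from here on");
    /* ... latency-critical work would go here ... */
    return 0;
}
[/code]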
This is a very complicated question.
Just as an example, take the caching of NFS/SMB/filesystem data once it has been read: these features were put in to speed up certain types of jobs and typical usage. Virtual memory was designed because processes typically have a lot of initialization code before they go into a loop doing things. Memory that is no longer used need not remain physically occupied. This leads to a swap area, or the ability to fetch a page again from its original place if it really has not changed, as with read-only data or read-only code.
Being able to quickly change the virtual memory from one process to another led to page tables.
If there were no virtual memory, then processes would run faster: all of that overhead would not need to occur. But then you would need to compile in everything needed, and a multi-process or general-purpose computer would be more difficult to create.
You will need to learn a lot about the terminology used in the question to be able to understand it. For example: what are you trying to optimize? Processor performance, memory usage, etc.?
This question would take me hours and hours to answer, and the answer differs across processors and even depends on whether the processor has a memory management unit.
Why is this question being asked?
DEC's VMS allowed kernel memory to be paged out (understand the difference between paging and swapping -- you mean paging). In particular, it allowed page tables to be paged out, so that you could have a memory reference that resulted in a double fault: first, read the disk to load the page table, and then another read to load the user's memory.
Can you say s l o w.....?
In a well-written kernel, the amount of kernel memory that is NOT actively used is relatively tiny. That said, today's kernels have a LOT of bloat, just because DRAM is so cheap and kernel folks are so profligate with it.
The Linux kernel is a resource manager. It has an API (sbrk() etc.) with which a running process indicates its need for additional address space. Note that addresses generated by different processes are not unique but are mapped so as to select process-specific memory. When the kernel exhausts total available RAM, pages are released using various policies. Swap space is disk that can be used to copy dirty pages onto for later retrieval, so as to release pages for immediate consumption. Linux can also reload RAM pages from executable files. The point is that a process never sees a pointer to swap space. It gets memory into its own address space / page tables only. Linux reclaims all of a process's RAM when the process exit()s, like when you get the command prompt back. Your login session is not the process. Every program that you run is a process and has its own address space. Elegant.
The “swap space” is just disk space, in the form of a partition or a file that has been pre-written, usually with zeroes. The memory pages of a waiting process can be written out to it (paged out) and later read back (paged in) by the innermost kernel routines.
When a process tries to allocate more memory than is available on the system, inactive or least-used pages of other processes are swapped out and the memory is given to the hungry process. With the help of the CPU's Memory Management Unit, the kernel can detect that the formerly “robbed” process tried to access a page that had been swapped out (a page fault). In this case the kernel reclaims a page of physical memory (a free one, or it swaps out a page from a different process) and pages the missing page back in, so the process can continue accessing the page that has been reloaded into its memory space.
This is not normally visible to the running process. The only exception is the use of memory-mapped files (mmap).
Hence paging, swap space, the MMU and the kernel play together, no one having an “advantage” over another. What can be discussed regarding “advantages” are the different approaches (algorithms) for detecting page activity, picking a process to steal a physical memory page from, and rearranging the process schedule under various process and memory stress levels.
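You can watch this machinery from user space: getrusage(2) reports minor faults (resolved without disk I/O) and major faults (required reading from disk or swap). A small sketch (faults.c is a made-up name):
[code]
/* faults.c - count page faults while touching freshly allocated memory. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

static void report(const char *when)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%-12s minor faults: %ld, major faults: %ld\n",
           when, ru.ru_minflt, ru.ru_majflt);
}

int main(void)
{
    report("before");

    size_t len = 64 * 1024 * 1024;        /* 64 MB */
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 1, len);                  /* first touch faults pages in */

    report("after touch");
    free(buf);
    return 0;
}
[/code]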
The operating system is trying to follow the plan its programmers gave it to make the best use of memory.
It has to have some free memory in order to make allocations immediately. A network or disk device may need some storage and can't wait for cache or swap reclaim.
File cache can be cleared out easily, but that isn't always the best decision. What is a better use, to keep the file cache that's been used once a minute for the last hour, or to keep some memory page that was used by a program during system start two days ago?
It is obviously better to swap out the old program data and keep the file cache.
Memory “use” is more complicated than ordinary users understand and the reporting of memory use by various tools is neither very understandable nor consistent. So your claim that 1G of RAM is “not in use” could easily be false and you just aren’t seeing the use.
But even if 1 GB of RAM was really “not in use” when you looked, the use of swap is still valid because of previous use of that RAM. You said “move … when” but I assume you didn't really observe that. You observed “had previously moved to swap and then not moved back when …”
A running computer has stale anonymous ram use that once moved to swap has no good reason any time soon (or maybe ever) to get moved back.
Bottom line, Linux algorithms for managing memory, including decisions of when/what to swap, are quite good (far better than Windows) and the frequent user complaints about it represent misunderstandings about how OS’s manage memory.
If you don’t understand “anonymous” memory vs. “mapped” memory vs. “cache” memory, you don’t know enough of the basics to even understand an explanation of what is happening.
Windows is coded to invent lies about memory levels (free vs. used) in order to make ordinary users who don’t understand how it should work think it is all working as they expect (reduces annoying false problem reports). That gets real annoying when an expert is trying to manage an actual problem in Windows memory allocation and gets no direct info on what is really happening.
For an expert, Linux is also annoying because the mechanisms have changed and documentation not kept up. But that is only if you are trying to manage a system with difficult memory management issue, which is getting uncommon since ram is cheap.
Not every time, only when the allocator does not have enough available to satisfy the request (or, with some allocators, when the allocation leaves the pool short enough that it is worthwhile asking for more).
The OS can only supply entire pages. A good memory allocator asks as few times as it can to avoid the cost of system calls.
Most allocators that are any good request multiple pages at a time, because the system call to do so is expensive. Some will ask for hugepages if the process has been asking for lots of memory.
It all depends on the behaviour of the particular allocation algorithm, and there isn’t necessarily just one on the system; in fact, there is very likely more than one. There are three common malloc implementations (glibc, tcmalloc and jemalloc), and applications often also roll their own for some or all allocations.
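To make the whole-pages point concrete, here is a small C sketch that bypasses malloc and asks the kernel directly via mmap(2); on Linux the kernel maps whole pages, so even a one-byte request yields a usable page:
[code]
/* pagesize.c - demonstrate that the kernel hands out whole pages. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);

    /* request a single byte; the mapping is still page-granular */
    char *p = mmap(NULL, 1, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[page - 1] = 42;   /* the last byte of the page is accessible too */
    printf("wrote at offset %ld inside the one-byte mapping\n", page - 1);
    munmap(p, 1);
    return 0;
}
[/code]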
Did you mean to ask *NOT* running out of physical RAM?
If almost all of your physical RAM is in use, it is normal for swap to be used regardless of swappiness. But you want to know about turnover.
You will “feel it”.
Swap in and swap out. Poll vmstat and look for the numbers under si and so.
Run System Monitor and watch the surge.
htop, nmon, glances, saidar, etc.
A few keywords for you here.
The kernel is responsible for tracking all resources assigned to a process. This includes memory pages, both physical and swapped. At process exit time, the kernel simply goes through its data structures and releases the resources, returning them to their respective free pools (and, in secure variants, zeroing them first).
The question defines swapping loosely while talking about reading binaries.
Swapping is done to store pages that are not backed by file mappings on disk, like the anonymous pages comprising the stack and heap areas of a process. Even when swap space is enabled, file-backed binary data is evicted to its on-disk image through the page cache, not to the swap partition.
This is a truly complex question.
I have often said that the hardware of a computer runs at a certain speed, and that speed used to be constant and not affected by the software. Maybe your CPU runs ‘faster’ when it has no threads to run, but then it will only be idle.
The effective performance is influenced by the speed of the hardware, by environmental factors, and by the job load, that is, all the software currently running at some point in time. Linux and Windows have the drawback that whenever you create a process, an image is created in memory and it becomes eligible for CPU scheduling. By default, there is no batch queuing mechanism.
The Linux kernel is responsible for managing multiple parallel threads and for managing CPU cycles, memory, network bandwidth, etc. The system operator is ultimately responsible for tuning the system for optimal performance. The more threads, and the more CPU cycles, RAM, and I/O bandwidth the applications demand, the bigger the system overhead and the lower the efficiency.
Short answer: Adding RAM is the only way to decrease paging for the same program load.
Long answer: Memory is managed as a set of fixed-sized pages. To be accessed (as program or data) a page must be in RAM. The virtual memory system writes modified pages to the disk (“page out”) so the RAM may be reused quickly if needed. How often “dirty” pages are written depends on “memory pressure,” the demand for RAM vs. how much RAM is available. At low memory pressure the VM system doesn’t page out anything. As the memory pressure increases page outs happen faster and faster as the VM system tries to stay ahead of the demand for fresh RAM.
Lowering memory pressure reduces paging. Running smaller programs or adding more RAM are the only practical ways of lowering memory pressure.
They can.
Back in the ’90s I wrote a POSIX-like operating system for a Commodore C64. With only 64K of addressable bytes available, paging was essential. The 64 had an MMU that could switch back and forth between memory pages. I wired up a RAM board with an additional 128K of RAM and some support chips, effectively giving me a 256K Commodore (slightly less due to 8K of ROM on page one). I did have to break out the MMU and modify the PC board a bit...
Well, in the OS I studied at college, pages were flagged with a time each time they were used; then, whenever the OS ran short of pages, it could just reuse the oldest.
There was no particular check on which process was using the page, which led to some interesting situations when there wasn't enough page space.
😎
If I understand your question: with swap on, why do you not see more available RAM?
The swap space is an overflow space; it is not an extension of RAM.
When RAM gets full and you have defined a swap space, Linux will move some code or data to the swap space in order to free up RAM. Generally, the algorithm for output to swap chooses low-usage pages: those not used for a relatively long time, and those that are not pinned.
What is not sent to swap: memory that belongs to device drivers (real memory, where the virtual address is the real address), and memory that is being used as an I/O buffer. The latter is a short-duration pin: the virtual memory becomes pinned, and when the I/O is complete some milliseconds later, the memory is unpinned.
Pinned memory is virtual memory marked as not swappable. As mentioned, some memory for device drivers needs real hardware memory to function, not virtual memory. That memory is pinned long-term.
Both physical and virtual memory are divided into fixed-size blocks called pages. This paging system allows efficient management of memory and enables the swapping mechanism between RAM and disk.
Virtual Memory Management : - Linux employs a demand-paging system, where virtual memory pages are only brought into physical memory when they are needed. The kernel uses a combination of physical memory (RAM) and swap space to create the illusion of a larger address space than physically available. When an application accesses a virtual memory page that is not currently in physical memory, the kernel retrieves it from swap space (if it's swapped out) or from the original data source.
Page Replacement Policies : - Linux uses various page replacement algorithms to decide which pages to evict from physical memory when it needs to make room for new pages. Pages that have not been accessed recently or are less frequently used are more likely to be selected for eviction.
Swappiness : - Swappiness is a kernel parameter that determines how aggressively the system swaps out memory pages to swap space. A higher swappiness value indicates a more aggressive approach to swapping, while a lower value means the kernel prefers to keep more pages in physical memory. Administrators can adjust the swappiness value based on system workload and performance requirements.
Memory Pressure Handling : - The Linux kernel continuously monitors memory usage and system performance metrics to detect memory pressure. When the system experiences memory pressure, the kernel may trigger mechanisms such as page reclamation, memory compaction, or process termination to alleviate the pressure and maintain system stability.
It depends on the OS, but in many, including Linux and OS X, unused RAM is used as file cache and when more memory is needed, the OS shrinks the cache and makes the RAM available to the process requesting it. Once physical RAM is consumed, the system dips into swap and performance begins to suffer.
I prefer to add a large amount of physical RAM in a system to avoid ever hitting swap, then set limits on the number and size of user processes. That way they can't exceed their fair share of system resources and affect system performance and stability.
The question is “Does Linux allocate at least an entire page whenever you request memory?”
That’s hard to answer because your question is using these terms conversationally (and loosely) and not technically (and precisely.)
First and foremost, Linux refers only to the kernel.
Secondly, you (a person) never request memory. An agent (a device driver, a program, a part of the kernel) acting on your behalf (sometimes very indirectly) will interact with the kernel to request memory, but not necessarily at the time a C library function like malloc() or calloc() is called - or even when the *nix APIs brk() or sbrk() are called.
Thirdly, most memory allocation (even by the kernel) is indirect: various pools of memory for different purposes are kept, and it's only when those pools grow that memory is allocated by the virtual memory system, which does allocate things in pages (for whatever a page is for the virtual memory system).
For user programs, sometimes memory is not allocated (by the virtual memory system) until it's actually referenced (read from, written to, or executed from). User programs in Linux usually interact with the kernel's memory management when loading a program (with the exec() family of functions), sharing memory, modifying memory, growing the stack (i.e., function calls), and growing the data segment (heap), by requesting memory through malloc()/calloc() (directly or indirectly) that is larger than can be satisfied by any already existing space being managed by the malloc()/calloc() functions.
Memory management through malloc()/calloc() may allocate memory internally in certain minimum chunks in order to optimize alignment for the CPU architecture; i.e., for requests larger than a "double" (or perhaps a "double double"), the memory will be aligned on those boundaries, meaning that there might be multiple different pools of aligned memory to allocate from. But that's the C library, not Linux, doing that.
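To illustrate the allocated-but-not-referenced point, here is a hedged C sketch (rss_demo.c is a made-up name) that watches the resident-set size in /proc/self/statm around a large malloc; the exact numbers will vary by system:
[code]
/* rss_demo.c - malloc'd memory is address space, not RAM, until touched. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long rss_pages(void)
{
    long size = 0, rss = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        /* first two fields: total program pages, resident pages */
        if (fscanf(f, "%ld %ld", &size, &rss) != 2)
            rss = -1;
        fclose(f);
    }
    return rss;
}

int main(void)
{
    printf("rss before malloc: %ld pages\n", rss_pages());

    size_t len = 256 * 1024 * 1024;       /* large: glibc will mmap this */
    char *p = malloc(len);
    if (!p) return 1;
    printf("rss after malloc:  %ld pages\n", rss_pages());

    memset(p, 1, len);                    /* touching faults the pages in */
    printf("rss after touch:   %ld pages\n", rss_pages());
    free(p);
    return 0;
}
[/code]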
As John answered, a context switch has nothing to do with the process being swapped out, but there is one thing the kernel does intelligently for the TLB flush during a context switch:
During a context switch, instead of flushing and invalidating all TLB entries, we only invalidate the TLB entries related to that particular process's translation-table entries.
Now how do we know how to do this?
Based on the ASID value of the process (each process in Linux has a unique ASID value), we know which TLB entries need to be invalidated on a context switch away from that process.
The above describes how it happens on the ARM architecture; I assume it is similar for other architectures as well.
The O/S might allocate a whole page to your application if required; the O/S tends to over-allocate.
But remember that a call to malloc, for example, will only cause an O/S request if the process can no longer satisfy the memory demanded by the program out of the existing 'working set'.
When a process exits, the Linux kernel tracks which pages were allocated for that process by checking the page tables. The page tables contain information about the pages that were allocated, such as their physical address, the virtual address they are mapped to, and the permissions associated with them.
When a process exits, the kernel will first free any pages that are still in physical memory. This is done by marking the page table entries as not in use and then freeing the physical memory associated with the page.
For swapped pages, the kernel will check the page tables to see if the page is marked as swapped. If it is, the kernel frees the swap space associated with that page instead, since a swapped-out page no longer occupies physical memory.
Um, it gets changed!
The approximate method for doing a write to a file is as follows. Each “open file” has a reference count active on the file itself, so the file cannot be truly deleted until the last copy is closed. Each open file also has a “seek point”. Then when a “write” call happens, it comes with a buffer and length of data to be written at the current seek point for that open file.
Next, the file system code in the kernel breaks up the write into units that do not cross page boundaries, so let’s just think about one such chunk. The file system code says “give me a pointer to page N of file F”. If the page is already in the page cache, the pointer is immediately returned. If the page is not in the page cache, the requesting process blocks and disk I/O is scheduled to read the requested page into the page cache.
Once the pointer to the page cache is returned, the requesting process copies the write data into the page cache page and marks that page “dirty”. The dirty mark requires that the page be written back to disk at some point.
If the write is for a complete page that is not in the page cache, you can skip the disk read and just return an empty page to be filled in.
Things are more complicated in real life because certain metadata operations must be written back to disk in a certain order to assure file system integrity in the event of a crash at a bad moment.
The key fact here is that the page cache is shared by all processes, so even if multiple processes have the same file “open” they all read and write the same set of pages.
PS There are additional complications if you think that all writes should be “atomic” in that each process that does a write would have to get write locks on all affected pages before writing any of them. I’m not sure systems do this, but they could.
PPS One very special case is “append” when one writes to the end of a file. These sorts of writes do need to be atomic, so that different processes can append to log files and not get a tangled mess. It is also important that only one process at a time “extend” a file by allocating new pages, since it would be bad to duplicate efforts.
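That append case is exactly what the O_APPEND flag provides: the seek to end-of-file and the write happen as one atomic step in the kernel, so concurrent writers don't interleave mid-record. A minimal sketch (demo.log is a placeholder path):
[code]
/* append.c - atomic appends with O_APPEND. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char line[64];
    int n = snprintf(line, sizeof line, "pid %d was here\n", (int)getpid());
    /* the kernel seeks to EOF and writes in one atomic step */
    if (write(fd, line, n) != n) perror("write");
    close(fd);
    return 0;
}
[/code]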
The function responsible for the activation of a swap area is sys_swapon() and it takes two parameters, the path to the special file for the swap area and a set of flags.
The top-level function for reading and writing to the swap area is rw_swap_page(). This function ensures that all operations are performed through the swap cache to prevent lost updates. rw_swap_page_base() is the core function which performs the real work.
The function responsible for deactivating an area is called sys_swapoff(). This function is mainly concerned with updating the swap_info_struct. The major task of paging in each paged-out page is the responsibility of try_to_unuse() which is extremely expensive. For each slot used in the swap_map, the page tables for processes have to be traversed searching for it. In the worst case, all page tables belonging to all mm_structs may have to be traversed.
Read more: Swap Management
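sys_swapon() is reached from user space through the swapon(2) wrapper. A hedged sketch, assuming root privileges and a swap file already prepared with mkswap (/swapfile is a placeholder path):
[code]
/* swapctl.c - activate a swap area from C; needs root. */
#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
    const char *path = "/swapfile";   /* hypothetical, made with mkswap */

    if (swapon(path, 0) < 0) {        /* flags can encode priority, see SWAP_FLAG_* */
        perror("swapon");
        return 1;
    }
    printf("%s activated; deactivate later with swapoff()\n", path);
    return 0;
}
[/code]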
If you look in /proc, the pseudo file system used by Linux to keep track of data structures in the kernel, you will see that each process has a “directory” assigned to it that has all the information on pages, status, etc. Example for one process (a browser tab), showing the /proc listing and part of the memory map listing:
[code]$ cd /proc
$ ls 816367
arch_status cwd mem patch_state stat
attr environ mountinfo personality statm
autogroup exe mounts projid_map status
auxv fd mountstats root syscall
cgroup fdinfo net sched task
clear_refs gid_map ns schedstat timens_offsets
cmdline io numa_maps sessionid timers
comm limits oom_adj setgroups timerslack_ns
coredump_filter loginuid oom_score smaps uid_map
cpu_resctrl_groups map_files oom_score_adj smaps_rollup wchan
cpuset maps pagemap stack
$ more 816367/maps
3b000000000-3b100000000 ---p 00000000 00:00 0
3b100000000-3b100100000 rw-p 00000000 00:00 0
3b100100000-3b100101000 ---p 00000000 00:00 0
3b100101000-3b10011f000 rw-p 00000000 00:00 0
3b10011f000-3b100121000 ---p ...
[/code]
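If you would rather read this map from a program than from the shell, here is a minimal C sketch that dumps its own /proc/self/maps:
[code]
/* maps_dump.c - print this process's memory map, as shown above. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("/proc/self/maps"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* start-end perms offset dev inode path */
    fclose(f);
    return 0;
}
[/code]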
A default setup of any browser on any OS has the browser cache located in a specific folder on the filesystem. When any part of the web page is loaded, the respective files are downloaded into the cache folder and rendered by the corresponding part of the browser engine.
Assuming that you are asking about saving the state of the virtual machine, a.k.a. taking a "snapshot": for most VMMs this means saving a copy of the VM's RAM at that moment, plus a copy of the VM's virtual disk image at the same moment (some VMMs know how to do incremental copies of their VMs, so the whole disk is not duplicated somewhere but is used as a new baseline for the saved state of the VM instead, etc.). So we can say that the entire browser "state", both in RAM and on disk, including the whole disk state (which contains the filesystem, which contains the folder with the cache), is included in that snapshot.
However, depending on what you are doing with the VM, the VM snapshot can be dependent on the VM disk image (As any incremental backup can be), or can be easily destroyed later, e.g. when you resume execution or take another snapshot while deleting this one, etc., so if you really need to do something with that data, better figure out some way of extracting it.
I worked as a systems administrator a few years ago, and I had an issue with a system running a variety of web servers: it was hitting swap too frequently, which reduced the performance of the web applications. I resolved the issue by redesigning and deploying the applications as Docker containers and increasing the RAM available to the Docker host.
Sure:
- sudo sysctl -w vm.drop_caches=3
You can read the documentation on this yourself. Doing it is generally a bad idea, because free memory is wasted memory. (And the kernel can easily “scavenge” cache to satisfy other uses - in particular, it ensures that the amount of cache that’s “dirty” (requiring IO to free) is limited.)
I see my Linux system using swap space even when there is still memory available, so the answer is yes. I don't know about Windows, but I guess it does something like that as well.
And I am not a kernel developer or anything, but I guess the operating system makes a guess about what is unlikely to be used again, like initialization code, and swaps it out while it isn't busy, so it can use the freed-up memory later.