Answer by Paul Jakubik


The diagram you found does a good job of showing what hyper threading can do, but doesn’t explain it. Let’s start by figuring out what this diagram is showing us.

Hyper Threading

Each of the 4 big and bold rectangles above is a single CPU. Each of the 9 rows inside of each CPU rectangle is a clock cycle. Each row in a single CPU is divided into 3 small rectangles. For these CPUs, they can do 3 things each clock cycle, so 3 rectangles.

Unfortunately, when we look at the 1 CPU in A, or the 2 CPUs in B, the CPUs spend a lot of time doing less than 3 things per clock cycle. Each CPU is executing a single thread as fast as it can, and just can't come up with 3 things to do every cycle. Thread 1 manages to fill all 3 slots in only 1 of its 9 cycles; Thread 2 manages it in only 2.

Even worse, both Thread 1 and Thread 2 have 2 clock cycles where the CPU isn’t able to do anything.

In C we see what hyper threading can do. A single CPU looks at 2 threads of work and figures out how it can do a little bit for Thread 1 at the same time as it does a little bit for Thread 2. A single CPU in C does the same amount of work in 9 cycles that 2 CPUs did in B. The magic of hyper threading is that a single CPU looks at 2 threads of work simultaneously, and does a better job of keeping itself busy than can be done with a single thread of work.

Time Slicing

Let’s look at the same diagram again, and try to figure out what time slicing would do.

None of the examples above show time slicing. A and B show each CPU running a single thread. C shows a single CPU running two threads simultaneously.

For time slicing, imagine a single CPU trying to run the two threads shown in B. The CPU might do the first 3 clock cycles of work for Thread 1, then the first 3 clock cycles of Thread 2, and keep going back and forth until, after 18 clock cycles, it has done as much work with time slicing as was done with 9 cycles of hyper threading.

The only problem with this comparison is that it assumes time slicing is free. As others have pointed out, time slicing isn't implemented in the CPU itself; it is an operating system feature. Each switch from Thread 1 to Thread 2 costs several CPU cycles: saving where you are in Thread 1, restoring where you left off in Thread 2, and only then doing some Thread 2 work. Every switch from one thread to another burns CPU cycles just on this bookkeeping.

Once you do all that work for time slicing, your CPU is still running a single thread, and isn’t able to keep itself as busy as it could.

Summary

With hyper threading, the CPU is able to do more work each clock cycle because the CPU looks at 2 threads to try to find work to occupy each cycle. With time slicing, nothing happens to get more work done each clock cycle. Instead, a lot of extra cycles are used each time the CPU switches from running one thread to another.
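The difference can be sketched with a toy simulation. Everything here is made up for illustration: each thread offers the core 0-3 independent operations per cycle (the 3 small rectangles in the diagram), and `SWITCH_COST` stands in for the OS context-switch overhead described above.

```python
# Toy model: a core with 3 issue slots per cycle, fed by threads that
# each produce 0-3 ready operations per cycle. Numbers are illustrative.
import random

SLOTS = 3          # issue slots per cycle (the 3 small rectangles)
SWITCH_COST = 5    # cycles burned per OS context switch (an assumption)
random.seed(0)

def ready_ops():
    """How many independent ops one thread can offer this cycle (0-3)."""
    return random.randint(0, SLOTS)

def single_thread(cycles):
    # One thread alone: it often cannot fill all 3 slots.
    return sum(ready_ops() for _ in range(cycles))

def hyper_threaded(cycles):
    # Each cycle the core fills its 3 slots from two threads combined.
    return sum(min(SLOTS, ready_ops() + ready_ops()) for _ in range(cycles))

def time_sliced(cycles, quantum=3):
    # One thread at a time, losing SWITCH_COST cycles at every switch.
    done, cycle = 0, 0
    while cycle < cycles:
        run = min(quantum, cycles - cycle)
        done += sum(ready_ops() for _ in range(run))
        cycle += run + SWITCH_COST
    return done

N = 9000
print("single thread:  ", single_thread(N))
print("hyper-threading:", hyper_threaded(N))
print("time slicing:   ", time_sliced(N))
```

Running it shows the ordering the diagram implies: hyper-threading completes the most work per cycle, and time slicing the least, because its switch overhead produces no work at all.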

Answer by Simon Martin


This question actually has a few misconceptions. I'll try to answer the question and sort out the misconceptions. I hope you don't get too bored, as the answer is a bit long.

A process is a collection of resources: memory, files, threads, etc. A thread is the resource to which CPU time is allocated. A process can have 0 or more threads associated with it at any one time. If a process has more than 1 thread, then it is multithreaded. There is only one type of multithreading.

If a multithreaded process is running on a computer that has only one core, then only one of the threads is active at any given time. If the computer has two or more cores, then multiple threads can potentially be running at the same time. This makes thread synchronization more important, as race conditions, deadlocks, etc. become more likely.

Each core belongs to a processor. If a processor has only one core, then that core has access to all the processor's resources at all times; however, if 2 or more cores share the same processor resources and they need access to the same resource, one will have to wait for the other(s) to finish.

The operating system must know about hyperthreading in order to schedule work onto cores efficiently. As you can see, if the system has 2 runnable threads, it is more efficient to run them on cores that belong to different physical processors, as there won't be any CPU resource contention.

A core is an instruction pipeline and associated register file and cache. I leave register file and cache as homework and carry on looking at pipelines ;-).

To understand what a pipeline is and its relationship to processors and cores, we have to understand a bit about how a CPU works.

In order to process an instruction, a CPU has to perform different operations. In general we have: read instruction, decode instruction, optionally read operands, process, optionally write result. If we have separate circuits in the processor for each of these steps, then we can actually do them in parallel, ending up with a sequence like the following:

step 0: read instruction 0
step 1: read instruction 1 and decode instruction 0
step 2: read instruction 2, decode instruction 1 and read operands instruction 0
....

This is called a pipeline. Pipelines improved processor efficiency greatly. But if we look closely at these different steps, we start to see other inefficiencies: when we access RAM, the processor just sits there waiting for external hardware; if the operation is an integer comparison, the FPU is idle. So the idea of feeding the pipeline from more than one stream of instructions took shape, and hyperthreading was born.
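The overlap described in the step sequence above can be sketched in a few lines. This is a toy model: the five stage names are assumptions matching the steps listed, not any real CPU's pipeline.

```python
# Toy pipeline: each instruction advances one stage per clock, so up to
# len(STAGES) instructions are in flight at once.
STAGES = ["fetch", "decode", "read operands", "execute", "write back"]

def pipeline_schedule(n_instructions):
    """Return, per clock cycle, which instruction occupies each stage."""
    cycles = []
    for clock in range(n_instructions + len(STAGES) - 1):
        row = {}
        for s, stage in enumerate(STAGES):
            instr = clock - s            # instruction index in this stage
            if 0 <= instr < n_instructions:
                row[stage] = instr
        cycles.append(row)
    return cycles

for clock, row in enumerate(pipeline_schedule(4)):
    print(f"clock {clock}: {row}")
```

The first few printed rows reproduce the sequence above: clock 0 fetches instruction 0; clock 1 fetches instruction 1 while decoding instruction 0; and so on.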


Answer by Abhilash Vana


First of all, Hyper-threading is just a trademarked name given by Intel. It is not so much a new invention as Intel's branding of a technique, and not fundamentally different from mainstream multi-processing.

Basically, in multi-threading, each process/task given to the CPU (2-core, 4-core, or 8-core) is sliced into a number of threads (2, 4, 6, 8, 12).

The maximum number of threads any task/process will divide into depends upon the number of physical/hardware threads of the CPU. For example, the Intel Core i7-8700K has 6 cores and 12 hardware threads in total (2 per core).

So, a 3D rendering task can get divided into up to 12 threads (on the i7-8700K). If it requires a single core it will take a single core; if it requires more, it will take more cores (as per the task size and OS scheduling).

In hyper-threading, the task given to the CPU is sliced into a number of threads just like in multi-threading, but in addition the OS sees each single physical core of the CPU as two virtual cores. For example, a quad-core processor with hyperthreading looks like an octa-core processor to the OS.

The number of hardware threads per core remains the same; presenting a single core as two cores doesn't double the work that core can do, it just divides the core's resources between the two logical cores.

So, hyperthreading is much like multithreaded multiprocessing where a single CPU core is seen as a dual core by the OS.

Answer by Alec Cawley


In multi-threading, there is one real set of registers, and saved copies of these registers for each thread. When a thread switch occurs, the registers of the current thread have to be saved and the registers of the new thread loaded. A lengthy operation.

In hyper-threading there are separate hardware registers, including program counter, stack pointer and so on, for each hyper thread. So the hardware can switch threads simply by selecting register set A or register set B. If there is nothing blocking, one can imagine the CPU interleaving instruction starts for the two threads on alternate clocks. But why bother? Two threads running at speed X are less efficient than one running at speed 2X.

But suppose thread A has to pause because it needs to access main memory - multiple levels of cache have failed to provide the data. It has to pause for perhaps 200 clocks before the data turns up. In a single-threaded machine, that is pure loss. In a hyper-threaded machine, thread B now gets all the instruction starts and runs at double speed - until it, too, has to fetch from memory. Then, if the hardware supports more than two threads, thread C can start. So clocks which would otherwise be wasted waiting for main memory or lower cache levels can be usefully used.
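The stall-hiding argument above can be put into rough numbers. This is a back-of-the-envelope sketch: only the 200-cycle miss figure comes from the text, and the compute-burst lengths are assumptions.

```python
# Back-of-the-envelope for stall hiding: a thread computes for `burst`
# cycles, then waits MISS cycles on main memory. With extra hardware
# threads, one thread's stall can be overlapped with another's compute.
MISS = 200   # cycles to reach main memory (the figure quoted above)

def utilization(burst, threads):
    """Fraction of cycles doing useful work, assuming each thread's
    stalls can be fully overlapped with the other threads' bursts."""
    return min(1.0, threads * burst / (burst + MISS))

for burst in (20, 100, 400):
    print(f"burst={burst:3}: 1 thread {utilization(burst, 1):.0%}, "
          f"2 threads {utilization(burst, 2):.0%}")
```

The shorter the compute bursts between misses, the more a second hardware thread helps; with long bursts, a single thread already keeps the core nearly busy.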

Answer by Assistant


Hyper-Threading is Intel's proprietary technology that allows a single physical CPU core to act like two logical cores. This means that each core can handle two threads simultaneously, improving the efficiency of the CPU by better utilizing its resources. Hyper-Threading helps improve performance in multi-threaded applications by allowing a single core to execute multiple threads, reducing idle time and increasing throughput.

Key Features of Hyper-Threading:

  • Logical Cores: Each physical core is treated as two logical cores by the operating system.
  • Resource Sharing: The two threads share the core's resources, such as execution units and cache.
  • Performance Boost: It can improve performance in multi-threaded applications but doesn't double the performance, as the threads share resources.

Multithreading:

Multithreading, on the other hand, is a programming concept that allows multiple threads to exist within a single process, sharing the same resources but executing independently. It can be implemented on various architectures, including those without Hyper-Threading.
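As a minimal sketch of that software concept, here are two Python threads in one process sharing the same memory, with a lock guarding the shared counter:

```python
# Two threads in one process, sharing memory and executing independently.
import threading

counter = 0
lock = threading.Lock()          # shared resource -> synchronize access

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # avoid a race on the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

Without the lock, the two threads would race on `counter` and the final value would be unpredictable; this is exactly the kind of synchronization concern multithreaded programs must handle.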

Differences between Hyper-Threading and Multithreading:

  1. Level of Operation:
    - Hyper-Threading: A hardware-level feature that enables simultaneous execution of threads on a single physical core.
    - Multithreading: A software-level approach where a program is designed to run multiple threads concurrently.
  2. Implementation:
    - Hyper-Threading: Specific to Intel CPUs and is a way to enhance the performance of physical cores.
    - Multithreading: Can be implemented in software on any processor, regardless of whether it has Hyper-Threading.
  3. Performance Gains:
    - Hyper-Threading: Provides performance improvements by better utilizing the physical core's resources.
    - Multithreading: The performance benefits depend on how well the threads are designed and how they utilize CPU resources, which can vary widely.

In summary, Hyper-Threading is a specific technology that enhances CPU performance by allowing multiple threads per core, while multithreading is a broader programming concept that allows an application to perform multiple tasks concurrently.


Answer by Behdad Esfahbod


The Wikipedia page on Hyper-threading has a lot of good information. In short: multithreading time-slicing is performed in software, by the Operating System, whereas Hyperthreading happens in hardware, by the CPU.

In Hyperthreading, a single CPU core is presented to the Operating System as two cores, and the OS schedules two tasks on the two "logical" cores as it would on two physical cores in a multi-processor system. The single physical CPU core will switch between the tasks on the two logical cores as it sees fit, e.g., when one task is stalled waiting for data to be loaded, it switches to the other one.

What Hyperthreading essentially does is to make sure the CPU core has a lot of independent operations to keep it busy. This works because new CPUs are "superscalar", that is, they have instruction-level parallelism. Again, the Wikipedia page on Superscalar is a great reference.
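You can see those "logical" cores directly from software. Note the hedge: `os.cpu_count()` reports logical CPUs; whether that number is twice the physical core count depends on the machine and on whether SMT is enabled.

```python
# Report the logical CPU count the OS was handed by the hardware.
import os

logical = os.cpu_count()
print(f"logical CPUs visible to the OS: {logical}")
```

On a hyperthreaded machine this is typically double the physical core count; on a machine without SMT the two numbers match.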

Answer by Muhammad Taimour


Multi-threading: A Central Processing Unit (CPU) can be thought of as the brain of a computer. A CPU is composed of cores, and ordinarily each core completes one thread (task) at a time. Hardware multi-threading is the ability of a core to work on multiple (typically 2) threads at a time.

Multi-threading is the general concept of running multiple threads at once. A thread is like a train of thought for a computer. This can be done by many means, such as regularly switching which thread is running (rapid alternation on a human scale: not "truly" simultaneous) or having multiple cores and assigning threads to each.

For example, a single CPU instruction might have to wait several cycles to fetch memory from RAM. Multithreading allows the CPU to move on to a different task while it is waiting for the first one to finish.

Hyper-threading is Intel's name for another such technique, which is to support multiple threads on a single core (two in this case), sharing many of the core's resources like ALUs. The idea is that a single thread often leaves many of those resources unused or even stalls entirely, so adding another thread can use the core's resources more effectively without adding much more hardware. The increased utilization can lead to an overall speedup, although not as much as when another core is added. In some cases it can even cause a slowdown, such as when the combined working sets can't fit in caches. Generally it helps somewhat.

Another, simpler way to say it: each CPU core can run one thread at a time. Intel's technology called hyper-threading allows the OS to think one core can run two threads at a time.

Like on the highway: one lane is one core. You add a toll gate for each lane. Still, one lane is one core.

Now instead you add two toll gates per lane, so drivers can pick either toll gate but then merge back into the same lane afterwards. That's still one core, but you can sometimes do double the work. That's hyperthreading.

Answer by Masoud Gheysari


I'm glad you asked! I'm going to tell you a story. Gather round, kids!

Once upon a time there were old microprocessors, or as many call them today, CPUs. Those CPUs had many different units inside them, such as the ALU (arithmetic and logic unit), the control unit (which decides which units have to work), the FPU (floating point unit), the MMU (memory management unit), etc. If you're familiar with how CPUs work, you'd know that a CPU fetches an instruction from memory, decodes it to see what it means, and utilizes the required units to do the task. And it repeats this again, and again, and again... millions of times per second.

So, the engineers of those old CPUs noticed that because a CPU has that many units, when it's running an instruction, most of its units are idle. For example, when a CPU executes an instruction which adds two integers, only its ALU is used. Other units such as the FPU, MMU, etc. are idle. This is not optimal.

So they sat down and used their brains to reach a solution. They needed to find a way to utilize those idle units in order to increase performance. Their solution was very nice: they designed the newer CPUs so that they fetch two instructions at the same time from memory and execute them simultaneously. They called this technology superscalar execution.

It seemed great. It was a free performance improvement, because they didn't extend the CPU, didn't add any new unit or extension, and didn't increase its clock. But there were some problems. There were many situations where the two simultaneous instructions needed the same unit. Or sometimes the second instruction needed the result of the first instruction for its calculations. In either of these two cases, the CPU had to wait for the first instruction to complete before starting to execute the second one. The engineers added the required circuits to detect these situations and delay the second instruction. This caused the performance improvement to suffer: while we expected double the performance, the improvement was at most around 25%!
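The hazard checks described above can be sketched as a toy model. The instruction encoding and unit names here are invented for illustration, not any real ISA.

```python
# Toy dual-issue check: two instructions can issue in the same cycle only
# if they use different units and the second doesn't read the first's result.
from collections import namedtuple

Instr = namedtuple("Instr", "unit writes reads")

def can_pair(a, b):
    """True if b may issue in the same cycle as a."""
    if a.unit == b.unit:        # structural hazard: both need the same unit
        return False
    if a.writes in b.reads:     # data hazard: b needs a's result
        return False
    return True

add  = Instr("ALU", "r1", ("r2", "r3"))   # r1 = r2 + r3
fmul = Instr("FPU", "r4", ("r5", "r6"))   # independent FP multiply
use  = Instr("FPU", "r7", ("r1", "r5"))   # needs r1 from the add

print(can_pair(add, fmul))  # True: different units, no shared registers
print(can_pair(add, use))   # False: 'use' reads r1 before it is written
```

Real superscalar hardware performs exactly this kind of check, in circuitry, for every pair of candidate instructions every cycle, which is why the detection logic is expensive.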

Also there were some other problems, such as branch prediction, cache invalidation, etc., but we don't need to learn them to understand HyperThreading.

OK. Fast forward a few years, to when superscalar execution was the norm and was implemented in every new advanced microprocessor, alongside other technologies. The technology improved and we had regular dual- and quad-core processors in our daily lives. Multiuser, multiprocessor, and multiprocessing operating systems found their way into microcomputers and our desktop PCs, and they were able to run many programs simultaneously. The operating systems also started to support multicore systems, and programs and software libraries were rewritten to use these improvements in technology. That was when an idea was born!

The bright new engineers made an important decision. They took the superscalar machinery and reconfigured it so that the CPU pretends to have two cores. It is the exact same processor with a single core, but it pretends to be dual core. Why? I'm about to answer.

When the operating system sees two CPU cores, it orders them to execute two separate programs simultaneously. Two different instructions from two completely separate programs are fed to the two cores. But we only have one core. That's where HyperThreading helps us. When the two instructions are fed to the single core, it utilizes more of its units, exactly like superscalar execution. But now the second instruction cannot depend on the result of the first, because they come from two different programs. So the circuits and logic needed to find these dependences are no longer needed, the processor gets simpler and cheaper, and the freed space can go to another unit. Performance also increases because there is no waiting on the second instruction for this reason.

Also, the chance of the two instructions needing the same units decreases slightly, because two different programs with different jobs mostly use different instructions, whereas, statistically, two instructions near each other in the same program tend to do similar tasks. So we also improved our performance statistics-wise!

These improvements brought the performance gain to about 60% for HyperThreading, versus at most 25% for superscalar execution alone. You can say that with HT you have two logical cores for every physical core, and each logical core is about 80% of a real core. Or you can say that each core with HT is more than 1.5 real cores.

Answer by Davesh Shingari


Multithreading is a general term which encompasses coarse-grain, fine-grain, and simultaneous multithreading.
Note that simultaneous multithreading is also called hyper-threading (just a fancier name). So let's go through each kind of multithreading.

Coarse-grain multithreading is the case where a thread experiences a long-latency event, e.g., a cache miss (which will take a long time to be serviced), and is flushed out of the pipeline along with its register data. It gets replaced by another thread, and the first thread gets switched back in once the second thread is done or itself takes a miss (there can be other implementation scenarios).

Fine-grain multithreading is the case where threads get switched out and in every cycle. In this case data is retained in registers, and the registers are partitioned per thread. This is what GPUs follow.

Now let's get to SMT, or hyper-threading. Here, instructions from 2 threads are executed simultaneously in a single clock cycle. This is possible because of proper utilization and scheduling of the superscalar units.

So basically, all of these are multithreading; they differ only in granularity.

Comment if you need any specific information.


HyperThreading is Intel's proprietary implementation of Simultaneous Multithreading (SMT). SMT is a form of multithreading commonly used today.
In SMT, more than one thread issues instructions (one or more instructions, depending on how wide the issue engine is) every cycle.

Intel’s HyperThreading results in presence of two threads in a single core. In HyperThreading, we maintain separate state information for the two threads in a core. There is duplication of state but THERE IS NO DUPLICATION OF EXECUTION ENGINES (like integer units) PER THREAD. The stream of instructions coming from two threads in a core is shared across the execution engines. Think of it as the threads being multiplexed into the execution engines.

While a HyperThreading-enabled core appears as two different cores to the OS, an HT core is not a true parallel core. It's just a duplication of state, and its performance won't be as good as the case where there are two true physical cores (though HT cores offer a major speedup over non-HT cores, because a thread will have wait times during which resources would sit idle that can be utilized by a second thread).
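
A quick way to observe this duplication of state (but not of cores) is to ask the OS how many logical CPUs it sees. A minimal sketch; the sysfs path is Linux-specific and may be absent on other systems, so it is guarded:

```python
import os

# Logical CPUs: on an HT machine this counts every hardware thread,
# i.e. typically 2x the number of physical cores.
print("Logical CPUs visible to the OS:", os.cpu_count())

# On Linux, sysfs reports which logical CPUs share one physical core.
# (Path is Linux-specific; guarded so the sketch runs elsewhere too.)
path = "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list"
if os.path.exists(path):
    with open(path) as f:
        print("Logical CPUs sharing core 0:", f.read().strip())
```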

Profile photo for João Craveiro

Multithreading can refer to software multithreading (multiple thread handling as performed by the operating system) or hardware multithreading (mechanisms provided by the CPU to assist and improve thread handling).

Within hardware multithreading, you have Simultaneous Multithreading (SMT), whereby the CPU gives hardware support to have more than one thread executing in parallel. Nonsimultaneous approaches to multithreading only give an illusion of parallelism (by allowing faster thread switching than that performed solely at the operating system level).

HyperThreading is a commercial trademark for Intel's proprietary implementation of SMT.

Profile photo for Vibhav Singh

Comparing these two would be like comparing me and my dog. Both are entirely different entities.

Multithreading refers to the general task of running more than one thread of execution within an operating system. Multithreading is sometimes loosely called "multiprocessing", which can include multiple system processes (a simple example on Windows would be, e.g., running Internet Explorer and Microsoft Word at the same time), or it can consist of one process that has multiple threads within it.

Hyperthreading, on the other hand, refers to a very specific hardware technology created by Intel, which allows a single processor core to utilize multiple threads of execution more efficiently. In other words, a CPU with hyperthreading is going to provide performance which is somewhat greater than a CPU which is otherwise the same but without hyperthreading, because the hyperthreaded CPU will be able to concurrently balance two (sometimes more, but hyperthreading is usually 2-way) threads of execution on a given system.

For the record, hyper-threading technology was not created from scratch by Intel. Intel did produce the first widely released use of SMT (Simultaneous Multithreading), but the basic idea was not an Intel invention.

Check out this Stack Overflow answer for more details: Difference b/w hyper threading and multithreading?

Happy Quoring!!!

Profile photo for Travis Sturzl

Hyper threading is just a clever way to alleviate the slowdowns of context switching between threads. It basically lets another process/thread state be loaded even if it's not actually running in parallel: the core is set up to run the other task quickly, and the need to unload the entire context goes away because there appear to be twice as many cores. A single core split into 2 hyper threads doesn't actually run both threads in parallel, but if you only had 2 threads running on the whole system you'd never have to context switch, whereas without hyper threading, every time the scheduler switched threads the processor would have to load the state of the thread being scheduled. That might be a simplified explanation, but it covers most of the benefits and uses of hyper threading.

Multithreading just gives a single process a way to have multiple threads of execution, either to perform operations in parallel when the hardware is capable, or to let the process keep doing work while one thread is blocked on an IO operation, such as the OS loading data from a hard drive. That lets you avoid blocking the entire process while waiting for the disk to finish reading. Hyper threading just enables some clever optimizations for multithreading without actually having to add another entire core.
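
That IO point is easy to demonstrate with plain software threads, independent of hyper threading. A minimal Python sketch, using `time.sleep` as a stand-in for a blocking disk read: four 0.2-second waits overlap, so the wall time stays near 0.2 s instead of 0.8 s.

```python
import threading
import time

results = {}

def fake_io(i):
    # Stand-in for blocking I/O: the thread releases the CPU while it
    # "waits", just as it would while the OS reads from a disk.
    time.sleep(0.2)
    results[i] = i * i   # each thread writes a distinct key

start = time.perf_counter()
threads = [threading.Thread(target=fake_io, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(f"4 overlapped 0.2s waits took {elapsed:.2f}s (serial would be ~0.8s)")
```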

Profile photo for Ravi Patel

Hyperthreading is Intel’s version of Simultaneous Multithreading (SMT). The core ideas and goal behind this concept are the same as regular threading. The key difference is granularity.

With SMT you are trying to fill in delay gaps between individual instructions at a nanosecond timescale.

Thread switching triggers a context switch on the CPU, which is expensive (on the order of microseconds to milliseconds) and is generally handled by the OS. SMT allows a core to operate on two different threads at the same time to better utilize the hardware (execution units, buffers, tables, control logic).

Profile photo for Aritra Sen

In very simple terms, hyper threading is the term Intel uses to market its technique for constructing logical cores out of physical core(s). As a result, the system is able to schedule more jobs in parallel onto the physical core(s).

Multithreading, meanwhile, is a term more relevant to applications. An application can achieve some degree of parallelism using multithreading: it breaks its job into chunks and assigns them to different threads. These threads in turn are mapped to kernel threads by the OS, which run on the logical cores the CPU creates out of physical cores.

Profile photo for Kevin Cameron

A compiler friend said it dropped out of a mechanism called "register renaming" that is used in pipelined processors. Generally, pipelining is used to create a context in registers for the CPU process, but the register set exists in different states at different pipeline stages - the R0 you see at the end of the pipeline might live in other positions before then. Since certain operations can stall the pipeline, it can be advantageous to have another thread in the pipeline at the same time, so that when one stalls the other may be able to proceed - making better use of the hardware.

In theory hyper threads sharing data shouldn't have the cache-coherency issues that threads on opposite sides of the caches have, and it should be easier to "fork" a thread with that kind of architecture, but it seems to add a lot of complexity for limited gain. Since the reallocation of resources is handled dynamically it appears to be too expensive to do with more than two threads. You are correct that hyper threads are not really running in parallel, and the technique relies on regular stalling events to get its benefits.

Profile photo for Phillip Remaker

Hyperthreading is a form of multithreading. Hyperthreading CPUs have a collection of duplicated hardware "support resources" around a single hardware execution unit to make that single execution unit look like a dual processor. When one execution thread stalls, the other can be serviced quickly without clearing the state of the stalled thread.

Wikipedia covers this topic well at Hyper-threading.

Hyper-Threading Technology is a form of simultaneous multithreading technology introduced by Intel. Architecturally, a processor with Hyper-Threading Technology consists of two logical processors per core, each of which has its own processor architectural state. Each logical processor can be individually halted, interrupted or directed to execute a specified thread, independently from the other logical processor sharing the same physical core.[4]

Unlike a traditional dual-core processor configuration that uses two separate physical processors, the logical processors in a Hyper-Threaded core share the execution resources. These resources include the execution engine, the caches, the system-bus interface and the firmware. These shared resources allow the two logical processors to work with each other more efficiently, and let one borrow resources from the other when one is stalled. A processor stalls when it is waiting for data it has sent for so it can finish processing the present thread. The degree of benefit seen when using a hyper-threaded or multi core processor depends on the needs of the software, and how well it and the operating system are written to manage the processor efficiently.[4]

Hyper-threading works by duplicating certain sections of the processor—those that store the architectural state—but not duplicating the main execution resources. This allows a hyper-threading processor to appear as the usual "physical" processor and an extra "logical" processor to the host operating system (HTT-unaware operating systems see two "physical" processors), allowing the operating system to schedule two threads or processes simultaneously and appropriately. When execution resources would not be used by the current task in a processor without hyper-threading, and especially when the processor is stalled, a hyper-threading equipped processor can use those execution resources to execute another scheduled task. (The processor may stall due to a cache miss, branch misprediction, or data dependency.)

Profile photo for Quora User

Totally different “problem domains”.

Multithreading is a software thing - a program that has multiple concurrent paths of execution is called “multithreaded”.

Hyperthreading is a hardware thing - a single set of execution resources in a CPU core is fed by two separate pipelines / state machines and appears to the OS / applications as two distinct CPU cores (or CPUs, depending on how the OS interprets it).

Those two things are not comparable.

Profile photo for Éric Nunya

“Hyper-threading” is a feature on most Intel CPUs, which appeared even before multi-core CPUs became mainstream in personal computers.

I don’t remember all the details, but basically it’s like a separate “front door” to each CPU core. So, a dual-core processor without hyperthreading would be like a duplex (2 houses built together) while a single-core with hyper-threading would be more like one house that has 2 front doors.

Multithreading means when a process is running multiple threads simultaneously, regardless of the CPU’s capacity, number of cores etc.

Profile photo for Yasmeen Maroof Khan

The main difference between hyper threading and multithreading is that hyper threading converts a single physical processor into two virtual processors, while multithreading executes multiple threads within a single process simultaneously.

Hyper threading is a technology developed by Intel to increase the performance of the CPU/processor. It allows a single CPU to run two threads.

On the other hand, multithreading is a mechanism that allows running multiple lightweight threads within a process at the same time. Each thread has its own program counter, stack, registers, etc.

Difference Between Hyper Threading and Multithreading

Hyper Threading is a technology that allows a single processor to appear to the operating system, and to the application programs that use it, as two separate processors.

Multithreading is a mechanism that allows multiple threads to exist within the context of a process, such that they execute independently but share their process resources.

Thus, this is the main difference between hyper threading and multithreading.

Functionality

In hyper threading, a physical processor is divided into two virtual or logical processors, whereas in multithreading, a process is divided into multiple threads.

Hence, this is another difference between hyper threading and multithreading.

Conclusion

The main difference between hyper threading and multithreading is that hyper threading converts a single physical processor into two virtual processors, while multithreading executes multiple threads in a single process simultaneously.
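
The "share their process resources" half of that contrast is easy to see in code. A minimal Python sketch: every thread mutates the same process-wide variable, which is exactly why the increment needs a lock.

```python
import threading

counter = 0              # lives in the process; visible to every thread
lock = threading.Lock()  # the read-modify-write below is not atomic

def worker(n):
    global counter
    for _ in range(n):
        with lock:       # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: four threads, one shared variable
```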

Profile photo for James Smith

Super-threading is a multithreading approach that weaves together the execution of different threads on a single processor without truly executing them at the same time. This qualifies it as time-sliced or temporal multithreading rather than simultaneous multithreading. It is motivated by the observation that the processor is occasionally left idle while executing an instruction from one thread. Super-threading seeks to make use of unused processor cycles by applying them to the execution of an instruction from another thread.

Multithreading computers have hardware support to efficiently execute multiple threads. These are distinguished from multiprocessing systems (such as multi-core systems) in that the threads have to share the resources of a single core: the computing units, the CPU caches and the translation lookaside buffer (TLB). Where multiprocessing systems include multiple complete processing units, multithreading aims to increase utilization of a single core by leveraging thread-level as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and in CPUs with multiple multithreading cores.

Hyper-threading is Intel's trademarked term for its simultaneous multithreading implementation in their Pentium 4, Atom, Core i7, and certain Xeon CPUs. Hyper-threading (officially termed Hyper-Threading Technology or HTT) is an Intel-proprietary technology used to improve parallelization of computations (doing multiple tasks at once) performed on PC microprocessors. A processor with hyper-threading enabled is treated by the operating system as two processors instead of one. This means that only one processor is physically present but the operating system sees two virtual processors, and shares the workload between them. Hyper-threading requires both operating system and CPU support for efficient usage; conventional multiprocessor support is not enough, and may actually decrease performance if the Operating System is not sufficiently aware of the distinction between a physical core and a HTT-enabled core. For example, Intel does not recommend that hyper-threading be enabled under Windows 2000, even though the operating system supports multiple CPUs (but is not HTT-aware).

If you want to understand Hyper-threading, read this excellent article: A Journey Through the CPU Pipeline - General Programming - Articles - Articles.

Basically, instructions are broken up into many smaller instructions (say "fetch", "decode", "execute", "store" or something like that). It is possible to perform the "fetch" and the "decode" of two different instructions simultaneously.

The processor pipeline is a trick to take advantage of this fact: by looking forward a bit at the next instruction that will be executed, the processor can start performing the "decode" of the next instruction before the "execute" of the current instruction has finished and be in effect executing both instructions at once.

Unfortunately, this only works as long as the processor can successfully predict the next instruction (branch prediction). If a processor guesses the wrong instruction, it must start its pipeline all over again at the correct instruction.

So where does hyper threading come in? As it turns out, a properly pipelined processor tends to be able to "execute" instructions faster than it can "fetch" and "decode" them. Consequently, Intel added an additional "fetcher" and "decoder" to each processor to balance out how much faster the "execute" part is. On a lot of workloads, this gives the effect of an "additional" processor.
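
The cycle counts behind that pipelining argument can be sketched with simple arithmetic (a toy model assuming one cycle per stage, no hazards, and a fixed flush penalty per branch mispredict):

```python
STAGES = ("fetch", "decode", "execute")

def unpipelined_cycles(n):
    # each instruction runs all stages before the next one starts
    return n * len(STAGES)

def pipelined_cycles(n, mispredicts=0):
    # a new instruction enters "fetch" every cycle; once the pipeline is
    # full, one instruction completes per cycle.  Each mispredict flushes
    # the pipeline, costing len(STAGES) - 1 bubble cycles.
    return len(STAGES) + n - 1 + mispredicts * (len(STAGES) - 1)

print(unpipelined_cycles(10))               # 30
print(pipelined_cycles(10))                 # 12
print(pipelined_cycles(10, mispredicts=2))  # 16
```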

If you're interested in learning more and the blog post above didn't satisfy you, you might want to flip through some of the slides covered in the excellent parallel computer architecture course at CMU: 15-418 Spring 2013

Profile photo for Turkishcouk

Hyper-threading and simultaneous multithreading are techniques used to increase the performance of a computer system by allowing parallel execution of multiple threads on a single core. Simultaneous multithreading (SMT) is the general technique, developed in academic research in the mid-1990s; hyper-threading is Intel's proprietary implementation of it, introduced commercially in 2002 on the Pentium 4 microarchitecture. The goal of simultaneous multithreading is to avoid idle time in which a processor has nothing to do: when the instructions of one thread stall, instructions from another ready thread are advanced to fill the otherwise idle issue slots.

Profile photo for Quora User

Hyperthreading is Intel's name for something called SMT - Simultaneous Multithreading.
SMT was/is mainly used in x86 and IBM POWER architectures, but is slowly entering the ARM world too - though there is still no consumer ARM CPU with SMT yet.

If we look at modern CPU architecture, every core is split into blocks: decoders that take instructions and convert them into uOps (micro-operations), a scheduler to schedule the uOps, and executors to execute them (EUs - execution units). Analyzing how a core works, it became obvious that a lot of the time the core sits idle - not utilized 100%. One very simple solution to keep utilization high was SMT.

So, SMT is a relatively simple hardware block (not many transistors) in which a single core has multiple register sets (the CPU's internal registers), and SMT takes care of scheduling instruction decoding and execution. An execution path through the core is called a thread.
Consider every CPU core as a set of internal registers, where the register called PC (Program Counter) tells which instruction is executed next. With SMT the core has two sets of registers and two PCs, where each PC points to a different program. In other words, such a single core has two threads. Two instructions are loaded from memory (two PCs, two threads) and decoded. The decoder, when it creates uOps, keeps track of which thread each belongs to. The scheduler does not need to care much, but the EUs track threads too, because uOps must use their own register set.
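
The two-register-set, two-PC idea can be illustrated with a toy simulation (class and method names here are made up, and the model is drastically simplified: a "program" is just a list of cycles in which the thread either has an instruction ready or stalls, and stalls in the second thread are ignored):

```java
// Toy model of SMT: one core, two program counters.
// A "program" is a list of cycles: 'true' means the thread has an
// instruction ready that cycle, 'false' means it would stall (e.g. on
// a cache miss that resolves in the background).
public class SmtSketch {

    // One thread alone: every slot costs a cycle, stalls included.
    static int runAlone(boolean[] prog) {
        return prog.length;
    }

    // Two threads sharing one core: each cycle, thread A issues if it
    // is ready; otherwise its stall elapses and thread B issues instead.
    static int runSmt(boolean[] a, boolean[] b) {
        int pcA = 0, pcB = 0, cycles = 0;
        while (pcA < a.length || pcB < b.length) {
            cycles++;
            boolean slotFree = true;
            if (pcA < a.length) {
                if (a[pcA]) slotFree = false; // A takes the issue slot
                pcA++; // a ready slot issues; a stall slot elapses either way
            }
            if (slotFree && pcB < b.length) pcB++; // B fills the idle slot
        }
        return cycles;
    }

    public static void main(String[] args) {
        boolean[] t1 = {true, false, true, false, true}; // 2 stall cycles
        boolean[] t2 = {true, true, true};               // always ready
        System.out.println("thread 1 alone: " + runAlone(t1) + " cycles"); // 5
        System.out.println("thread 2 alone: " + runAlone(t2) + " cycles"); // 3
        System.out.println("both via SMT:   " + runSmt(t1, t2) + " cycles"); // 6
    }
}
```

Run back to back on one core, the two programs would take 5 + 3 = 8 cycles; with thread 1's stall slots filled by thread 2's instructions, they take 6. That gap is exactly the utilization SMT recovers.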

From a performance perspective, SMT on x86 increases the performance of multi-threaded loads by roughly 10-50%; a typical figure is about 20%. This is visible in Cinebench, where the difference with and without SMT is far from double. In some cases SMT can bring a large difference, but never a full 100%.

While x86 is limited to 2 threads per core, IBM Power CPUs go up to 8 threads.
SMT is also a source of various problems, e.g. it has been exploited in security attacks, and in some cases it is disabled to increase safety. But for consumer workloads it is better to leave it enabled.

Profile photo for Bitrus David

Multithreading refers to the general task of running more than one thread of execution within an operating system. Multithreading is more generically called "multiprocessing", which can include multiple system processes (a simple example on Windows would be, e.g., running Internet Explorer and Microsoft Word at the same time), or it can consist of one process that has multiple threads within it.

Multithreading (or should I say, multiprocessing) is a software concept. Practically any Turing-complete CPU can perform multithreading, even if the computer only has one CPU core and that core does not support hyperthreading. In order to support multiprocessing, the CPU will interleave execution of different threads of execution, by executing one, then another, then another, where the operating system will divide up the time available into "slices" and give a roughly equal amount of time to each thread (the time doesn't have to be equal, but that's typically how it's done unless a process requests a higher priority).

Note that, whenever there are more software threads of execution trying to execute at any given time than there are available hardware (simultaneous) threads of execution, then these software threads will be "interleaved" among the available cores. In the case of a "uniprocessor" (one CPU core with no hyperthreading), if you have more than one software thread, they will always be interleaved. If you have a 4-core CPU with hyperthreading, that's 8 "hardware threads", meaning the CPU can execute 8 simultaneous threads of execution at the same instant, so if you had 8 software threads trying to run, they could all run at once; but if you had 9 software threads, one of the hardware threads would have to interleave a pair of threads (the exact pair of threads chosen would depend on the operating system's scheduler implementation).
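
The hardware-thread count described above is visible from software. In Java, for example, the JVM reports the number of logical processors, which on a 4-core CPU with hyperthreading would be 8 (the class name below is made up, and the printed figure depends on your machine and OS):

```java
public class HardwareThreads {
    public static void main(String[] args) throws InterruptedException {
        // Logical processors = physical cores x hardware threads per core,
        // e.g. 8 on a 4-core CPU with hyperthreading enabled.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Hardware threads: " + logical);

        // Start one more software thread than the hardware can run at
        // once; the OS scheduler will time-slice ("interleave") them.
        Thread[] pool = new Thread[logical + 1];
        for (int i = 0; i < pool.length; i++) {
            final int id = i;
            pool[i] = new Thread(() ->
                System.out.println("software thread " + id + " ran"));
            pool[i].start();
        }
        for (Thread t : pool) t.join(); // wait for all of them to finish
    }
}
```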

Hyperthreading, on the other hand, refers to a very specific hardware technology created by Intel, which allows a single processor core to interleave multiple threads of execution more efficiently. In other words, a CPU with hyperthreading is going to provide performance which is somewhat greater than a CPU which is otherwise the same but without hyperthreading, because the hyperthreaded CPU will be able to concurrently balance two (sometimes more, but hyperthreading is usually 2-way) threads of execution on a given core.

However, hyperthreading is strictly slower than having completely separate physical cores, because there are some types of operations that can disrupt the performance advantages of hyperthreading, while there are fewer operations that can cause such an event with completely separate cores.

Take the following example, where "1 core" is assumed to perform exactly the same in all examples:

Example 1: 2 cores, no hyperthreading.
Example 2: 4 cores, no hyperthreading.
Example 3: 2 cores with hyperthreading.
Example 4: 4 cores with hyperthreading.

In this case, Example 4 will always be fastest. Example 2 might sometimes be about as fast as Example 4, on workloads that are extremely poorly suited to taking advantage of hyperthreading's optimizations.

Example 3, on the other hand, might sometimes, on workloads where hyperthreading is most advantageous, be almost as fast as Example 2, even though it has half as many physical cores.

Example 1 of course, will be slowest of all the examples, but it might sometimes be about as fast as Example 3, when running a workload poorly suited to hyperthreading.

In real world benchmarks with modern Intel CPUs, we typically find that hyperthreading results in, speaking very generally, a 20% to 40% improvement in performance compared to no hyperthreading (with the "no hyperthreading" case being benchmarked by disabling the hyperthreading feature in the BIOS). Occasionally there will be workloads where disabling hyperthreading shows a performance advantage, but these workloads can be rare in actual usage. But, if I had a choice between 4 cores with hyperthreading or 8 cores, assuming that each core itself has the same performance, I would choose the 8 core CPU every time.

source: What is the difference between multithreading and hyperthreading?

Profile photo for Aritra Das

Hyperthreading allows a core to run two applications simultaneously. It increases CPU performance through better processor efficiency, thereby allowing you to run multiple demanding applications without the PC lagging. In Intel's lineup it is found across the whole 10th-gen range from i3 to the i9 X-series, but on 9th gen only on the i9 and the X-series processors. It is very useful for people who want to multitask in everyday life. As for the first question: for example, the i5-10600K has 6 cores and 12 threads, if I recall correctly, so each core handles 2 threads. One thread can run one application at a time, which means two threads can run two applications at a time, so one physical core can handle two threads, meaning two applications at once. That will boost your work, take less time to process it, and produce output faster than the previous gen could. I think this ☝ should clear up your theory about hyperthreading.

Everyone have a nice day😇😇😇

Profile photo for Robert Love

Hyper-threading (HT) is Intel's name for their Simultaneous Multithreading (SMT) implementation in x86 processors. SMT allows a processor to better utilize its functional units by permitting multiple threads of execution within a single processor core. SMT is very similar to multithreading in software in this regard.

With HT, x86 processors have two threads of execution per processor core. These threads of execution are called "virtual cores" or "HT cores." Each HT core has a unique set of registers, processor pipeline, instruction counter, and processor state. But there remains a fixed number of functional units per core—for example, one ALU (the thing that does integer math). The two HT cores share the resources of the processor. HT is (usually) faster than not due to processor stalls: The resources of the processor are more fully utilized by enabling multiple threads of execution.

Profile photo for Bogdan Margarit

Imagine you have a big container with 1000 balls of different colors. Your boss comes and asks you to separate the balls by their color, and put all the balls of each color into their own separate containers.

Single Thread

You start working, taking each ball individually, looking at its color and putting it into the appropriate container. After doing this 1000 times, you’ll be done. If it takes you 5 seconds to process each ball, the whole procedure will take approximately 83 minutes.

Multiple Threads

You call your two best friends and all of you start working at the same time. You divide the big container into three parts, each of you handling roughly 333 balls. The same procedure is applied: you take a ball, look at its color and put it into the appropriate container. It still takes 5 seconds to process a single ball, but you and your friends can process 3 balls simultaneously, so the total time to complete the job is cut to a third, approximately 28 minutes.

As you can see, in the second example, by adding more resources to the process (your two friends), we have managed to reduce the time needed to complete the job. This is essentially what multi-threading is about. There are also technical advantages, such as better usage of resources, if you have multiple CPUs or cores. If you don’t take advantage of them, it’s like having your two friends sitting on a bench doing nothing while you alone do all the work.
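
The ball-sorting analogy maps directly onto code. Here is a sketch (class and method names are made up) in which each worker thread handles its own slice of the container, exactly like the three friends above:

```java
import java.util.*;
import java.util.concurrent.*;

public class BallSort {
    // Count balls per color, one thread per worker, each worker
    // handling its own slice of the container.
    static Map<String, Integer> sortBalls(List<String> balls, int workers)
            throws InterruptedException {
        Map<String, Integer> totals = new ConcurrentHashMap<>();
        Thread[] crew = new Thread[workers];
        int slice = (balls.size() + workers - 1) / workers;
        for (int w = 0; w < workers; w++) {
            final int from = Math.min(balls.size(), w * slice);
            final int to = Math.min(balls.size(), from + slice);
            crew[w] = new Thread(() -> {
                for (String color : balls.subList(from, to))
                    totals.merge(color, 1, Integer::sum); // atomic per key
            });
            crew[w].start();
        }
        for (Thread t : crew) t.join(); // wait for every worker to finish
        return totals;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> balls = new ArrayList<>();
        String[] colors = {"red", "green", "blue"};
        for (int i = 0; i < 1000; i++) balls.add(colors[i % colors.length]);
        System.out.println(sortBalls(balls, 3));
    }
}
```

A `ConcurrentHashMap` with `merge` is used for the shared containers so that two workers dropping a ball of the same color at the same moment cannot lose a count.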

Using threads in Java

To achieve multi-threading in Java, the most rudimentary way is to make use of the Thread class. Firstly, you need to write the code that will be executed on the thread, and while you can do this by extending the Thread class, the recommended way is to implement the Runnable interface and pass it to the Thread class. Runnable is an interface that has a single method run() which you must implement, as this is the method that the thread will call.

Once you have your implementation of Runnable, you can create threads and pass them the code you want them to run. Here’s a very simple snippet to help you understand:

public class MyClass {

    // This is our implementation of Runnable, which we'll later pass to Thread
    static class Worker implements Runnable {
        @Override
        public void run() {
            // Everything in here will be executed on a different thread
            System.out.println("Hello from " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        // We create a few threads, passing each one a new Worker
        Thread t1 = new Thread(new Worker());
        Thread t2 = new Thread(new Worker());

        // Threads won't execute unless we call the start method.
        // Note that calling start does not necessarily run the thread immediately.
        t1.start();
        t2.start();
    }
}

This example is so basic that there's a high chance you won't even see it much in production these days, unless you're working on legacy code. Modern frameworks such as Spring come with abstractions that greatly simplify the way multi-threading is achieved.
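
Even without a framework, the JDK's own `java.util.concurrent` package sits between raw `Thread` objects and full framework abstractions. A sketch (the class name is made up) using `ExecutorService`, which reuses a fixed pool of threads instead of creating one per task:

```java
import java.util.concurrent.*;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        // A thread pool reuses a fixed number of threads across tasks.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit() returns a Future; a Callable task can return a value,
        // unlike Runnable's void run().
        Future<Integer> f1 = pool.submit(() -> 21 + 21);
        Future<String> f2 = pool.submit(() ->
            "Hello from " + Thread.currentThread().getName());

        System.out.println(f1.get()); // get() blocks until the task finishes
        System.out.println(f2.get());
        pool.shutdown(); // let the pool's threads exit
    }
}
```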

Profile photo for Quora User

Okay, you want a processor to run fast. One way to run fast is to clock the hardware at high speeds.

The problem is that the more silicon “stuff” changing state per every clock, the lower the clock speed necessarily has to be. This is because some stuff can’t do their thing until the stuff ahead of it has finished.

For a processor core, the “stuff” is roughly broken into four logical steps:

1. Fetch. Get the instruction from memory.
2. Decode. Figure out what that instruction is supposed to be.
3. Execute. Actually do the hardware things that the decoded instruction says to do.
4. Write-back. Update memory or other components with the result of the just executed instruction.

Ahh, you observe an interesting thing. If you buffer the output of each block, the system as a whole can be clocked at much higher speed. This is because there’s just so much less stuff that needs to be done in each pipeline stage.

Each machine instruction now takes four clock cycles to retire. However, you now also have four instructions in flight at the same time. With the overall clock speed now much higher, you have a much faster processor.
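
The arithmetic behind that speed-up can be made concrete with a small sketch (names are made up; it assumes an idealized 4-stage pipeline with no stalls):

```java
public class PipelineMath {
    static final int STAGES = 4; // fetch, decode, execute, write-back

    // Without pipelining, every instruction occupies the whole core
    // for STAGES cycles before the next one can begin.
    static long unpipelined(long n) { return n * STAGES; }

    // With pipelining, a new instruction enters every cycle, so once
    // the pipe is full, one instruction retires per cycle.
    static long pipelined(long n) { return n + (STAGES - 1); }

    public static void main(String[] args) {
        long n = 1_000_000;
        System.out.println("unpipelined: " + unpipelined(n) + " cycles"); // 4000000
        System.out.println("pipelined:   " + pipelined(n) + " cycles");   // 1000003
        // Throughput approaches 1 instruction/cycle (about 4x better here),
        // and the shorter per-stage logic also allows a higher clock speed.
    }
}
```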

Things are good. You get a promotion. This turns out to be premature as this picture is also broken.

The problem is that some (many) instruction sequence patterns have dependencies on what went on before it. Test-and-branch type instructions are classical trouble makers. You simply can’t assume that all instructions in flight should be execute...

Profile photo for Jay Trang

Not sure about pipelining, but the difference between hyper-threading (which I’ll refer to as multi-threading for simplicity) and parallelism is simply how many threads are involved and what each thread does.

With multi-threading, you may have a handful of threads and each thread would take on different tasks. With parallelism, you would have hundreds, or even thousands, of threads working at the same time, with every thread doing the same task together.

Think of the difference between a CPU and a GPU, where CPUs symbolize multi-threading and GPUs symbolize parallelism. You'd have one set of programs and processes per thread in a CPU, but all of the threads in a GPU are working together to run a video or game.

Profile photo for Axel Rietschin

From a software perspective there is virtually no difference.

Hyper-threaded cores are surfaced to applications as regular cores, as such a 4-core HT-capable chip with HT enabled will appear as 8 cores and is virtually indistinguishable from an actual 8-core chip without HT.

The performance characteristics of 4-core + HT vs. 8-cores is different, but there is no notable difference from an application standpoint.

Literally the only place in the system where the difference may matter is in the OS kernel and in particular the scheduler.

Profile photo for Quora User

Hyperthreading is Intel nomenclature for a technology called simultaneous multithreading. SMT has actually been around since the 1960s, but today its most common implementation is in the form of Intel's hyperthreading.

Modern high performance microprocessors are known as superscalar processors.

Superscalar processors make use of multiple processing "lanes", each of which can perform work. The idea is to be able to process multiple instructions every clock cycle, to improve performance. Something like Intel's Skylake architecture actually has 8 such lanes, with 2 of them being FP-capable.

image from wikichip

This means that in a perfect case, a Skylake core can potentially schedule 8 µops (instructions) per cycle, which is pretty fast.

In reality however, this throughput is rarely achieved. For starters, not all these lanes (referred to as ports on the diagram) are equally capable, so depending on the instruction mix you might not achieve this maximum throughput. There are also many data dependencies that mean some instructions cannot be executed simultaneously, along with many other reasons.

In practice, this means that some of these lanes are not filled and thus remain idle. This wastes potential performance and power. Modern processors will try to re-order instructions on the fly to fill these slots, but there are still idle execution resources in the end.

The question comes up: can we fill these free slots with other work?

If a processor is designed with SMT in mind, it could potentially start pulling in instructions from other threads in an attempt to fill these slots. Instructions from different threads are generally independent, so there is a potential for parallelism there.

This is what Hyperthreading does: the processor tries to schedule instructions from one or more alternate threads to make use of idle execution resources.

source in image. This chart tries to illustrate execution resource usage depending on the multi-threading scheme. We’re interested in the one on the right (simultaneous multi-threading).

SMT needs to be designed into the processor architecture from the get go however. Usually, SMT requires duplication of most if not all of the processor’s front end; including the program counter, branch unit, renaming, scheduling etc.

Addendum: after re-reading this post after the fact, I realize I may have overemphasized the relation between SMT and superscalar architectures. While SMT can help increase pipeline occupancy (especially in an in-order design), it's not the only situation where it provides a benefit. In general, being able to quickly switch focus to another thread without a context switch (as SMT makes feasible) can allow the processor to hide various high-latency situations, such as a cache miss. This is true for scalar and superscalar as well as in-order/OoO pipelines.

Profile photo for Quora User

Hyperthreading is Intel branding for a technology called Simultaneous multithreading. SMT is used by many different processor architectures, most notably in AMD’s competing Ryzen CPU line.

To understand what Hyperthreading/SMT achieves, think of a modern CPU as an electronic assembly line (which is referred to as the CPU pipeline).

image from MCA-PROCESS

Processor instructions go from one end to the other, passing through a series of steps, in lockstep. On a modern Intel CPU, an instruction might take 14 clock cycles to go from end to end, during which the instruction is fetched, decoded, executed and retired. At every step, new instructions are inserted into the pipeline, meaning every instruction should have one just behind it, resulting in high resource utilization. Despite the 14 cycle latency, a high throughput is maintained (modern Intel chips are actually capable of retiring 4 instructions per clock).

But what I described here is the ideal scenario. The hardware designers want every possible slot to be filled so that the processor remains busy at all times. In practice though, we have things called pipeline "bubbles". Bubbles can occur for many different reasons, most notably because the processor can't find any useful work to do.

But bubbles aren’t desirable. During a bubble, parts of a processor sit idle, doing nothing and consuming power. To stick to the assembly line analogy, it’s like we stop feeding the machines with material to work on, meaning they just operate on nothing.

Anyway, this is when Hyperthreading comes into play. As established, processor resources do go idle from time to time because they don’t have anything to do. Instead of letting those resources go to waste, why not use this downtime to work on other things instead? So that’s what Hyper-threading aims to do. Instead of just waiting around, the processor will try to run instructions from other threads (basically other programs) to try and fill up these available slots. This can boost effective throughput by as much as 30% on a modern desktop processor.

Do note that despite what task manager might say, Hyperthreading is not the same as doubling your core count. A processor core with hyperthreading will appear to your operating system as 2 separate logical cores, but no resources are being doubled here; they’re just being used more efficiently.

Here’s a more technical diagram as well:

https://courses.cs.washington.edu/courses/csep548/00sp/lectures/class4/sld058.htm

Profile photo for Sybille Ebert

What is multithreading, and why is it considered faster?

In its most basic sense, multithreading is about doing something else while you are waiting for something.

In other words, multithreading is a way of multitasking. You are probably well familiar with multitasking in your life. If you order a pizza, you are not going to stand frozen like a statue in front of the door while you are waiting for delivery. Maybe you go and wash the dishes, put the kids to sleep or something else. You are doing those things automatically and without even thinking about them.

Every task involves some kind of context. Most importantly, you need to remember where you left off, so that when you come back to it, you can continue from there and avoid starting from scratch. Our lives would be a disaster if we had to start everything from scratch every time.

For a processor, things are really not much different. When it is waiting for something (e.g. for a network operation to complete, or for the user to move the mouse), it is free and can do something useful. CPUs are extremely fast, so they can switch context millions of times per second. Nevertheless, every context needs some memory to store the current state of a task, such as the active registers, the location of the last executed instruction, etc. And even if switching is fast, it still takes some time. If there are a lot of context switches, this time can add up significantly.

This is why multithreading is not always faster. It's good for problems that lend themselves to parallelization, which means they can be broken down into activities that run at the same time without blocking each other. On the other hand, sequential problems are not good for multithreading, since every step depends on the result of the previous step.

Exploiting parallelism in programs is a programming discipline. It's a kind of art, since it requires thinking about several aspects of a program and finding a sweet spot that makes sense. Writing a multithreaded program requires you to think about concurrency, locking, synchronization etc., and those are all hard problems in themselves. They are hard not because they are complicated, but because you have to think about them mathematically, and that takes a certain skill to master. Not every programmer has the patience to learn this skill.

Bottom line is this:

  • If your problem can be parallelized and you use multithreading correctly, your program will run faster.
  • If you use multithreading on a sequential problem, your program will run slower.
  • If you don’t apply multithreading correctly, your program might behave unpredictably - it can freeze, crash or corrupt data in a nasty random pattern that is extremely difficult to diagnose and fix.
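
The first two bullets can be illustrated with a small sketch (names are made up): summing numbers parallelizes because addition is associative, so partial sums from separate threads can be combined, while an iterated computation in which every step consumes the previous result cannot be split across threads at all.

```java
import java.util.stream.LongStream;

public class ParallelVsSequential {
    // Parallelizable: addition is associative, so partial sums computed
    // on separate threads can be combined at the end.
    static long sumTo(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    // Sequential by nature: each state depends on the previous one
    // (a linear congruential generator), so no thread can start
    // step i before step i-1 is done.
    static long lcg(long seed, int steps) {
        long x = seed;
        for (int i = 0; i < steps; i++)
            x = x * 6364136223846793005L + 1442695040888963407L;
        return x;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(100)); // 5050
        System.out.println(lcg(42, 3));
    }
}
```
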
Profile photo for Keith Mayhew

Yes, Hyper-Threading is Intel's branding for their implementation of Simultaneous Multi-Threading (SMT), which is a specific type of hardware-level multi-threading that has a long history.

While Intel had the first commercial SMT implementation, other vendors also implemented it. It is a technique to get parallelism out of a superscalar processor's resources, and it presents them as multiple logical (or virtual) CPU cores to an OS.

Thus while it is a form of multi-threading at the hardware level it does not correspond to software threads at an OS level.

Also, the logical CPUs won't be as fast as completely independent CPU cores, since they still share resources, so an OS needs to be aware of them to make effective scheduling decisions.

Whether SMT will continue to be used in the future is not clear, due to security concerns, with some vendor/OS support being dropped for this reason.

Profile photo for Quora User

The official name for a core that has multiple threads is SMT - Simultaneous Multi-Threading. Intel's name is HT, or Hyper-Threading.
So, functionally there is no difference, only in name.

SMT is very popular in almost all x86 CPUs, but not in ARM, where only one core design supports it.
In x86, SMT has 2 threads, while IBM Power CPUs have up to 8 threads per physical core.

In general, adding SMT to a superscalar core is not a big deal; it uses relatively few transistors. But ARM cores are much simpler than x86, so adding SMT would be more impactful.
The difference in speed with and without SMT depends on the program, but on average it is about 30%. Some programs even suffer, while others benefit more or less.

Profile photo for Tadeusz Liszka

What is hyperthreading, and why is it better? Why do some modern day processors not have it?

Those are three separate questions, and the 2nd one is malformed (better than what?). I'll focus on the last question.

Hyperthreading (HT) allows a single processor (single core) to behave like it has two independent cores. But not fully: it depends on the type of operations performed by the program. For typical office use it may be almost as fast as two cores; for some other programs there may be only a marginal gain, and sometimes it is even slower. A reasonable analogy is a worker on a production line who uses a hammer, a screwdriver and an expensive drill. Obviously not all tools are used at the same time, so the owner can hire another worker, give him a hammer and screwdriver, but ask the workers to share the drill. Now he can expect production speed to nearly double for processes which do not use the drill much, or, at the other extreme, a small slowdown for products which require a lot of drilling (a slowdown because the idle worker bothers the other one from time to time). Typically an HT processor costs only a little more than the no-HT version, so very often it makes sense to use it. But there are applications which do not profit from HT, so CPUs designed for that type of job can be built without HT capability.

There are two types of hyperthreading. Intel calls their technology hyperthreading; AMD called theirs integer cores. The worker analogy described above represents the AMD CPU, while Intel's workers share not only the drill but also the hammer. More pieces are shared, but in both cases the FPU (the floating-point unit, the chunk which performs non-integer arithmetic) is shared between 2 logical cores. For most scientific/engineering applications, FPUs are very heavily used. Naturally, for word processing or internet browsing, the FPU is idle. So it makes sense that for building supercomputers it is cheaper not to use HT-capable processors.

Profile photo for Lawrence Stewart


Second thing first, hyperthreading is not like virtual memory.

Hyperthreading is what Intel calls this idea. The generic term, which AMD uses, is SMT, or simultaneous multi-threading.

To understand, first we’ll talk about what is a thread.

A thread is an address space (think contents of virtual memory) and a set of architectural registers (Like AX, BX, IP, SP, etc). The term comes from “thread of execution”.

The abstract idea of execution of a thread is that the processor fetches the next instruction from the address space according to the address in the IP (instruction pointer) register. Then the processor does whatever the instruction says, like ADD BX,AX or MOV AX,$1000 or whatever. Then the processor advances the IP to the next instruction.
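This fetch-execute-advance loop can be sketched as a toy interpreter. The instruction format and register names below are illustrative only, not real x86 encodings:

```python
# Toy model of a thread: an address space (here, a list of instructions)
# plus architectural registers, stepped by a fetch-decode-execute loop.

def run_thread(program):
    regs = {"AX": 0, "BX": 0, "IP": 0}
    while regs["IP"] < len(program):
        op, *args = program[regs["IP"]]   # fetch the instruction IP points at
        if op == "MOV":                   # MOV reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":                 # ADD dst, src
            regs[args[0]] += regs[args[1]]
        regs["IP"] += 1                   # advance IP to the next instruction
    return regs

# MOV AX, $1000 ; MOV BX, $2 ; ADD BX, AX
print(run_thread([("MOV", "AX", 1000), ("MOV", "BX", 2), ("ADD", "BX", "AX")]))
```

A real core does the same loop in hardware, just massively overlapped, as described next.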

With one processor, this is all there is, and that is the way things were in the early 1960s.

Next, designers realized they could have multiple CPUs connected to the same memory, and this leads to the idea of multiprocessors and coherent memory. Still no real confusion, because each processor has the resources to run one thread, registers, and so on.

Next, in an effort to make CPUs faster, designers made the CPUs pipelined, so that several instructions were in various stages of execution at once. The simple version is “instruction fetch, decode, execute, write results”, but matters are much more complicated because the “execute” step is often broken up into smaller steps.

Next, designers found ways around the problem that if an instruction computes a result, and a previous instruction still needs the old value it gums up the works. This was solved by “register renaming”, in which a pool of registers are assigned to play the role of the architectural registers, and there could be many different copies of AX around at any given moment. This all settles out when the pipeline drains.
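A minimal sketch of the renaming idea, assuming a simple free list of physical registers (the pool size and register names are made up for illustration):

```python
# Sketch of register renaming: every write to an architectural register
# (e.g. AX) gets a fresh physical register, so an older in-flight
# instruction that still needs the previous value of AX is unaffected.

class Renamer:
    def __init__(self, n_physical=8):
        self.free = list(range(n_physical))  # pool of physical registers
        self.map = {}                        # architectural name -> physical id

    def read(self, arch):
        return self.map[arch]                # readers see the current mapping

    def write(self, arch):
        phys = self.free.pop(0)              # allocate a fresh physical register
        self.map[arch] = phys                # later readers of `arch` use it
        return phys

r = Renamer()
r.write("AX")          # first version of AX -> physical reg 0
old = r.read("AX")     # an in-flight instruction captures physical reg 0
r.write("AX")          # a newer write gets physical reg 1...
assert old == 0 and r.read("AX") == 1  # ...without clobbering the old value
```

(Real hardware also frees physical registers as instructions retire; that bookkeeping is omitted here.)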

At this point, processor speeds were improving faster than memory speeds, so a memory operation could take dozens to hundreds of cycles. That can stall progress, leaving the CPU sitting idle.

Now we can talk about hyperthreading.

The idea is that a single physical core can execute instructions from multiple threads. The register allocator just makes sure that they are kept separate. When one thread stalls because it is waiting for memory, the core can continue to execute instructions from the other thread.

As long as the core has enough resources (physical registers and so forth) to manage the state of two or more threads, their instructions can execute in an interleaved manner “simultaneously”. It is clever.
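A toy simulation of this stall hiding, with made-up latencies (one issue slot per cycle and a fixed 4-cycle memory stall), shows two threads sharing one core finishing in fewer total cycles than the same two threads run back to back:

```python
# Each thread is a list of instructions: "C" = 1-cycle compute,
# "M" = memory load that stalls its thread for MEM_LATENCY cycles.
# The core issues at most one instruction per cycle; with two hardware
# threads it can issue from the other thread while one waits for memory.

MEM_LATENCY = 4  # illustrative, not a real number

def run(threads):
    ip = [0] * len(threads)     # next instruction per thread
    ready = [0] * len(threads)  # cycle at which each thread may issue again
    cycle = 0
    while any(ip[t] < len(threads[t]) for t in range(len(threads))):
        for t in range(len(threads)):  # pick any thread that can issue now
            if ip[t] < len(threads[t]) and ready[t] <= cycle:
                op = threads[t][ip[t]]
                ip[t] += 1
                ready[t] = cycle + (MEM_LATENCY if op == "M" else 1)
                break                  # only one issue slot per cycle
        cycle += 1
    return cycle

prog = ["C", "M", "C", "M", "C"]
serial = run([prog]) + run([prog])  # two threads, one after the other
smt = run([prog, prog])             # two threads sharing one core
print(serial, smt)                  # SMT finishes in fewer total cycles
```

The gap between the two numbers is exactly the memory-stall time the second thread managed to hide.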

For some memory-bound programs, you can get almost twice the performance. For compute-bound programs it may even slow down. Mostly there is some benefit, maybe on average 50% more throughput for two hyperthreads sharing a core, at much less than 50% extra hardware cost.

Down inside the core, you can think of the scheduling hardware as having available instructions from two or more threads, and a set of tally boards showing who is using which machine resources for what: registers, compute units (add, multiply, and so forth), outstanding memory operations, and so on.

The scheduler compares the resource requirements of the next dozen or so instructions for each thread with the dependencies of each (this instruction depends on a result calculated by that instruction), and what resources are available (I have a floating point multiplier free, what can I do with it?)

The scheduler releases instructions, perhaps even out of order or “speculatively”, to be executed.

The net result of this sort of out-of-order, speculative, hyperthreaded core is a great deal of speed, but also a sort of indeterminacy: you don’t know the order of events at the smallest scales. For x86, the rule is that cores appear to run instructions in the order given by the program, but down inside it is kind of insane. There is a whole other level of madness in the rules necessary to make sure that changes made to memory appear to occur in the order given by the program.

Sometimes the underlying indeterminacy leaks through in ways that can be statistically estimated by programs; this is what led to the Meltdown and Spectre attacks a couple of years ago.

Profile photo for Philip Cameron


pro for single thread: it is easier and quicker to write and debug.

con for single thread: the program doesn’t get faster on a multiple-CPU system.

pro for multi thread: it executes on multiple CPUs. This lets it distribute the work among a number of threads, which lets it scale to handle more transactions/requests. It often runs faster.

con for multi thread: it can be very difficult to debug. You have to carefully lock access to data that is shared among the threads while one thread is using it. (You don’t lock code sequences, you lock data access.) You can’t access the data in any way without holding the lock. You can’t assume a thread will run at any particular time, or how far it will get when it does run. You have to be careful about sequenced operations, especially if things need to execute in a particular order.

Threads typically run until they encounter a lock. When the lock is released, all the blocked threads are unblocked at the same time. At some point a number of them will be assigned to CPUs and start executing. (It may be a while before any of the threads are assigned a CPU). One of them will get the lock and the rest will block again. You never know who will win the lock. So you need to write the code so that it doesn’t matter who wins.
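A minimal sketch of the “lock data access” rule, using Python’s threading module: four threads increment one shared counter, and because every access happens while holding the lock, the final total is exact no matter when each thread runs or who wins the lock:

```python
# Four threads race to increment one shared counter. The read-modify-write
# of `count` is protected by a lock; without it, interleaved updates could
# be lost and the final total would come up short.
import threading

count = 0
lock = threading.Lock()

def worker(n):
    global count
    for _ in range(n):
        with lock:       # never touch the shared data without the lock
            count += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()             # wait for all workers; scheduling order is unknown
print(count)             # exactly 40000 with the lock held on every access
```

Note that nothing in the code depends on which thread acquires the lock first, which is exactly the property the paragraph above asks for.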

NOTE: The Linux kernel is multi-threaded, as are the drivers, modules, etc.

Profile photo for Jeff Drobman

Any CPU-type SoC may have multiple cores of CPU, GPU, NPU, etc. CPU cores run 1 or more threads; with 2 or more threads, that is called “multi-threading” (MT). “Hyper-Threading” is Intel’s own implementation of MT, of a form called “SMT”, for “simultaneous” MT.

Profile photo for Quora User


Today when dealing with CPUs we talk about:

  • Core - Today most CPUs have at least 2 cores; even mobile phones have 4 to 8 cores, desktops go up to 16, and server CPUs have up to 64 per CPU.
  • Thread - A program execution path. Each running program has at least one thread. In technical terms, a thread is the Program/Instruction Counter inside a core telling it which program instruction to execute next.

A typical ARM core can execute only one thread, but an x86 core (for the last 20 years) can execute two threads simultaneously. This means an x86 core can run two programs in parallel, while an ARM core has to switch between programs and as such will be slower (let’s say 50% slower).

Multithreading means executing multiple programs (threads) in parallel, simultaneously. It can be achieved either (as in the ARM case) by using multiple cores, or, in the x86 case, by SMT - Simultaneous Multi-Threading. IBM POWER CPUs have up to 8 threads per core.

Speaking of x86 SMT, it gives roughly a 20-50% speed improvement, but the improvement depends a lot on the load, on the programs running. In some cases SMT even slows down execution.

The reason x86 has SMT lies in its complexity. x86 cores are very complex, and adding SMT does not cost many transistors. Spending, let’s say, 10 million transistors on SMT makes sense because we get some improvement. In the ARM case it makes less sense because ARM cores are simpler (though there is one ARM core with SMT, used in automotive). It is better to add another core and get up to 100% improvement than to implement SMT and gain some 20%.

Anyway, multithreading. A CPU benchmark has two results, single-threaded and multi-threaded. Single-threaded is the real core speed, and all CPUs reach their highest operating frequencies in single-threaded mode. Multi-threaded depends on the number of cores, but the improvement is not linear: running 2 cores will not give a 100% speed improvement. Well, it depends on the task.

Amdahl's law describes multi-threaded improvements (not SMT): speedup = 1 / ((1 - p) + p/n), where p is the fraction of the program that can run in parallel and n is the number of cores.
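As a quick sanity check of Amdahl's law (speedup = 1 / ((1 - p) + p/n), where p is the parallel fraction and n the core count), a small sketch:

```python
# Amdahl's law: if a fraction p of a program can run in parallel on n
# cores, the overall speedup is 1 / ((1 - p) + p / n). The serial part
# (1 - p) caps the gain no matter how many cores you add.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallel:
print(amdahl_speedup(0.9, 2))      # well under 2x on 2 cores
print(amdahl_speedup(0.9, 16))     # 16 cores fall far short of 16x
print(amdahl_speedup(0.9, 10**6))  # never exceeds 1/(1-p) = 10x
```

This is why doubling the core count in the benchmarks below gains far less than double the score.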

For example, games are mostly single-threaded, with additional threads used for service tasks, and as such they benefit from a higher CPU clock; Intel has usually won as the fastest gaming CPU.

Most programs benefit a bit from a multi-core/multi-threaded CPU (it depends very much on the program). For example, the famous Cinebench, which is a genuinely multi-threaded benchmark (results chart not reproduced here):

The Threadripper has double the cores of the 12900 and the 3950 and only gains about 20%.

It not only has twice as many cores but also quad-channel memory, compared to dual-channel in the others.

Programs specially written for multicore CPUs will benefit a lot, but most programs we use do not.

Because of technological limitations, CPU clocks have not passed 5.5 GHz, and with newer nodes, e.g. 5 nm, they struggle to reach 3.5-4 GHz. So the only way to get more performance is to add more cores. An improved architecture gains some 10%, but the real game changer is the number of cores. Of course, most users benefit somewhat; e.g. a web browser will render a page in one thread while other threads load images from the web site.

The Cinebench result above is a nice example: the Intel 7980 has two more cores but is slower than the 5950 and the 12900; architectural improvements over 5 years gained only 15%.

Profile photo for Quora User


Hyperthreading is Intel's name for a technique called SMT - simultaneous multithreading.
SMT is nothing new; Intel started using it many years ago. x86 CPUs have 2-way SMT (2 threads per core), but there are CPUs with 8 threads per core (IBM POWER).
On ARM, however, SMT is something completely new and barely used at all. The ARM Cortex-A65 is the first ARM core with SMT, and it is used for automotive. Even Apple, which makes the best ARM CPUs, does not have SMT and probably does not plan to add it.

Every modern core converts the CPU instructions programs use into several internal instructions called micro-operations (uOps). uOps are placed into an OoO (out-of-order) buffer, and each one is executed in an EU (execution unit). This approach enables executing multiple instructions at the same time and is the reason for IPC > 1 (instructions per clock). CPUs are also clever: they internally reorder uOp execution to gain even more performance (the OoO buffer).

Watching a CPU execute programs, it becomes visible that a core is not 100% utilized; no program can keep the CPU 100% busy all the time. And here SMT comes into play. Because CPU instructions are already split into uOps, all that is needed to support multiple threads (of execution) is an additional set of CPU registers and some minimal logic. Instructions are read from two different addresses (two threads, each with its own PC - program counter), decoded (converted into uOps), and scheduled for execution. Each uOp carries info about which registers (which thread) it belongs to, and the EUs do not care where a uOp came from.
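The slot-filling idea can be sketched with a toy utilization count, assuming 3 issue slots per cycle (like the diagram at the top) and made-up per-cycle uOp counts for the two threads:

```python
# Toy issue-stage model: each cycle the core has SLOTS issue slots, and
# the scheduler fills them with ready uOps from any thread. A single
# thread rarely has 3 uOps ready; two threads together fill more slots.
# This counts filled slots only; uOps beyond the per-cycle limit are
# simply ignored in this simplified model.

SLOTS = 3  # issue slots per cycle

def utilization(threads):
    # threads: per-thread lists of ready uOps in each cycle (diagram rows)
    cycles = max(len(t) for t in threads)
    issued = 0
    for c in range(cycles):
        ready = sum(t[c] for t in threads if c < len(t))
        issued += min(SLOTS, ready)  # cannot issue more than SLOTS per cycle
    return issued / (cycles * SLOTS)

thread1 = [1, 2, 0, 3, 1, 2, 0, 1, 2]  # like the diagram: rarely fills 3
thread2 = [2, 1, 2, 0, 3, 1, 2, 3, 0]
print(utilization([thread1]))           # one thread leaves many slots empty
print(utilization([thread1, thread2]))  # two threads fill far more slots
```

The uOp counts per cycle are invented for illustration, but the effect matches the diagram: the combined threads keep the slots much busier than either thread alone.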

The final result is a relatively cheap implementation (not too many transistors) giving 20-50% speed improvements. SMT does not give a 100% speed improvement!
How much speed improves depends on the programs; in most normal applications it's about 20-30%. Better something than nothing.

But SMT comes with its own set of problems - vulnerabilities. Much advice has been to disable SMT to mitigate some of them.

Because most ARM cores are simpler than x86 cores, SMT makes little sense for them. Implementing an additional core is better, and the cost is only somewhat more transistors.
