The Buggy Ball Explained – One Piece Technology

Morning and welcome back to another video. Today we're going to be talking about the Buggy Ball, so let's go.

The Buggy Ball is one of those weapons that was showcased ages and ages ago, back in the East Blue Saga, and it was used by Buggy the Clown. It's essentially a cannonball that can do devastating amounts of damage. Buggy proclaimed to everyone that he would be able to conquer the Grand Line with these Buggy Balls, and they have actually been shown further on: at Marineford, Buggy had a tiny Buggy Ball that he was able to use.

Basically, it's just a small or medium-sized cannonball, but when fired it does insanely more damage than a regular cannonball would. Personally, I find this a nice one to talk about because, after Marineford, there's basically no mention of the Buggy Ball, and I really want it to come back. They're basically red cannonballs with Buggy the Clown's Jolly Roger on them, and the devastating explosions they create are insane. I do think Buggy was a bit overly optimistic when he said he could conquer the Grand Line with these, but he could fairly take over a nice portion of Paradise with them, especially with his crew, now that he's a Shichibukai. With the Buggy Balls, not many upstart pirates would be able to go up against him, I reckon. Obviously the New World pirates would still be able to kick his ass, but in Paradise he probably wouldn't have much of a problem with the Buggy Ball still in use.

There isn't much else to go on with the Buggy Balls. I do think we may see an adapted version, but in all honesty I just want to see the Buggy Balls again because they're just funny; they make for nice little bits of comic relief, like the little one that just goes boom. It's just funny when they're actually being used, more so than Buggy seriously using these Buggy Balls to fight. At the end of the day, the Buggy Ball is one of those things I really would like to see in One Piece again, even just for a bit of comic relief; Buggy accidentally blowing one up on himself would be quite funny.

Muggy Balls are another form of Buggy Ball, and that's the one I was talking about: the really small one that makes a massive explosion. They're called Muggy Balls instead of Buggy Balls, but they're still the same thing, just a downsized version of the Buggy Ball. That being the case, that is the video on the Buggy Ball. Have a lovely rest of your day, guys, and I shall see you in the next video tomorrow. Bye bye.

Wi-fi 6 802.11 Ax || Latest Wifi Technology Explained || Wifi Names & Standards || Digital Inspires

Wi-Fi 6 has been a trending topic ever since its proposal, and in this video we're going to give an elaborate description of Wi-Fi 6 and its uses. Wi-Fi 6 is based on the IEEE 802.11ax standard and enables next-generation Wi-Fi connectivity. It will provide the capacity, coverage, and performance required by users even in dense environments such as stadiums and other public venues. Wi-Fi 6 networks also enable lower battery consumption in Wi-Fi 6 devices, making it a solid choice for any environment, including smart home and IoT uses. Key benefits of Wi-Fi 6 technology include higher data rates, increased capacity, good performance in dense environments, and improved power efficiency.

Wi-Fi 6 provides the foundation for a host of existing and emerging uses: from streaming ultra-high-definition movies at home or on the go, to mission-critical business applications requiring high bandwidth and low latency, to staying connected and productive while traversing large congested networks in airports and train stations. The ubiquity of Wi-Fi and its ability to complement other wireless technologies helps bring the promise of constant connectivity closer to reality. It also creates very diverse and densely populated Wi-Fi environments, requiring technological advances to meet the needs of users.

Wi-Fi 6 brings improvements to current features of Wi-Fi as well as new features that provide additional benefits. New capabilities include:

  • Uplink and downlink orthogonal frequency-division multiple access (OFDMA), which increases efficiency and lowers latency in high-demand environments.
  • 1024 quadrature amplitude modulation (1024-QAM), which enables peak gigabit speeds for emerging bandwidth-intensive use cases.
  • Improved medium access control (MAC) signaling, which increases throughput and capacity while reducing latency.
  • Increased symbol durations, which make outdoor network operations more robust.

Upgrading to Wi-Fi 6 technology will bring enhanced performance to users in smart home environments supporting multiple Internet of Things (IoT) devices per user, as well as to businesses and operators running large-scale deployments. Wi-Fi 6 brings more capabilities to support next-generation connectivity uses.
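The jump from Wi-Fi 5's 256-QAM to Wi-Fi 6's 1024-QAM can be sanity-checked with a few lines of Python (a back-of-the-envelope sketch, not part of any official Wi-Fi tooling): an M-point QAM constellation carries log2(M) bits per symbol.

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    # An M-point QAM constellation encodes log2(M) bits in each symbol.
    return int(math.log2(constellation_points))

wifi5 = bits_per_symbol(256)    # Wi-Fi 5 (802.11ac) tops out at 256-QAM
wifi6 = bits_per_symbol(1024)   # Wi-Fi 6 (802.11ax) adds 1024-QAM

print(f"256-QAM:  {wifi5} bits/symbol")   # 8
print(f"1024-QAM: {wifi6} bits/symbol")   # 10
print(f"Raw per-symbol gain: {(wifi6 - wifi5) / wifi5:.0%}")  # 25%
```

Two extra bits per symbol is a 25% raw gain; real-world throughput gains are smaller once coding overhead and the stricter signal-quality requirements of 1024-QAM are factored in.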

The Best Technologies That Will Drive Your Tablet In 2013

This year, the tablet market will become saturated with a variety of CPUs designed specifically for tablets. Many major computer manufacturers unveiled all manner of touchscreen devices sporting such chips, with dramatic increases in performance and battery life.

Despite what you’ve been told, all these CPUs are good; offering varying degrees of battery efficiency, processing power, graphics and cost. This article covers the latest and best of these tablet chips scheduled for release in 2013. Before getting into what’s available, it’s important to explain some of the biggest differences between the main technologies out there: ARM and x86.

What’s the Difference Between ARM and x86?


Two instruction set architectures will compete for market share in 2013 and on – ARM and x86. Consumers need only know that ARM powers Android, iOS and Windows RT (not to be confused with actual Windows) operating systems; whereas x86 provides the tech behind Windows 8 and Android (Android is compatible with both x86 and ARM architectures). For most users, the operating system will determine whether or not they purchase the device, since many will already have made purchases in a particular app store.

  • ARM: You may have heard of the acronym “ARM” before – it stands for Advanced RISC Machine, produced by ARM Holdings. The architecture powers the vast majority of handsets and tablets on today’s market, whereas x86 dominates the desktop and server markets. ARM chips generally provide extremely good low wattage efficiency, which makes them ideal for mobile devices.
  • x86: x86 is synonymous with chipsets designed by Intel and AMD (and Via). Historically, x86 chipsets edge toward possessing power at the expense of wattage consumption, particularly at lower frequencies.

What To Expect From Manufacturers In 2013?

“Temash” and “Kabini” by AMD


AMD’s newest lines of tablet chipsets integrate the graphics processor within the CPU, resulting in what AMD refers to as an “Accelerated Processing Unit” or APU, which Matt explained here. After initial teething troubles, the technology quickly developed into the backbone of next generation console gaming, powering both the upcoming Xbox One and Playstation 4. The APU provides a relatively decent processor with very good gaming graphics at low wattage. There are two new lines of APU, codenamed “Temash” and “Kabini” being released in 2013, dedicated for mobile devices:

Temash: Also known as “Elite Mobility”, the Temash x86 line will focus on tablet, notebook and hybrid tablet/notebook designs. Of the two new lines, Temash will feature lower heat production and power consumption at the expense of processing power, relative to Kabini. While Temash features relatively weak CPUs, its strong graphics capabilities more than make up for its processing shortcomings.

Temash will appear in a Quanta reference design, shown in the video below. Early interviews indicate a $300/€300 price point. As the video shows, Temash’s APU runs Windows 8 very smoothly.

http://www.youtube.com/watch?v=b0ojipXo1wo

Kabini: Kabini takes aim at the higher-powered portion of the mobile market, primarily dealing with notebooks. While some convertible tablets might include Kabini processors, this particular line will not show up on tablets because of its relatively high wattage consumption.


In a nutshell: For inexpensive tablets, the chipset to look out for is Temash (also known as Elite Mobility). AMD’s upcoming tablet line will likely have excellent graphics for mobile gaming on the Windows 8 and Android platforms. On the downside, their wattage consumption will exceed that of ARM-based systems by a sizable margin, resulting in overall poorer battery life.

Tegra 4 by NVIDIA


NVIDIA’s breakthrough platform, the Tegra 3, received a major update in Tegra 4. Tegra 4 features a powerful graphics processor alongside a quad-core ARM processor. The Tegra line has excellent battery endurance, as well, using a design similar to ARM’s big.LITTLE concept, which pairs an efficient and a powerful processor together. In line with this paradigm, Tegra uses a specialized second high efficiency processor for handling idle-state background operations. Consequently, Tegra-based tablets tend to drain very little when not actively being used.

Below is a video clip of Tegra 4 in action. The graphical performance appears similar to modern consoles, except in a tablet form factor.

The NVIDIA Shield, a portable handheld gaming console powered by Tegra 4, costs $299 (we will be reviewing it).


In a nutshell: Tegra 4 is a great ARM-based CPU with excellent graphics capabilities and good battery endurance. It’s also included in devices priced at less than $300, making it relatively affordable.

Snapdragons 600 and 800 by Qualcomm


Qualcomm’s latest ARM-based designs fit in both smartphones and tablets. The latest benchmarks show them to have both excellent performance and battery endurance.

Snapdragon 800: The Snapdragon 800 performed very well in early benchmarks, showing extremely good performance per watt. Although it loses to Tegra 4 in benchmarks, it’s still quite fast, and its battery endurance will likely be superior to Tegra 4’s.


Snapdragon 600: The Snapdragon 600 powers the HTC One. For the most part, it represents an improvement over the previous Qualcomm S4 Pro. Currently, the Snapdragon 600 is rumored to power the next generation of Nexus 7 tablets, which will release sometime later this year.

In a nutshell: The Snapdragon series of CPUs is going to do some real damage as it is low cost, high performance and very power efficient. If you’re buying an Android or Windows RT system, try to find one powered by Snapdragon 800.

“Haswell”, “Clover Trail+” and “Bay Trail” by Intel


Intel entered the mobile game late with its first low wattage CPU, the Atom, and its “Medfield” system-on-a-chip, which prioritized power efficiency. Unfortunately, its performance was not particularly stellar. Since then, Intel has released incremental refinements of its Atom architecture, resulting in the codenamed “Bay Trail” series. Additionally, Intel has added a number of graphical and efficiency improvements to its main series of CPUs, codenamed “Haswell”, making it an ideal fit for the tablet market.

Haswell: Haswell is Intel’s latest revision of its Intel Core CPU line, emphasizing power efficiency and graphical capabilities. The first line of Haswell powered tablets, unfortunately, will use fans to cool the CPU, thus impairing battery longevity. The first fanless Haswell powered tablet will go on sale toward the end of 2013, manufactured by HP. On the downside, the overall price of Haswell equipped computers will exceed $500. This unfortunately prices Haswell well away from the ARM-based market. Upcoming Haswell tablets:

  • HP: HP plans a passively cooled Haswell tablet for release in late 2013. Passively cooled designs (no fans) have substantially better battery endurance than devices that use fans.
  • ASUS: ASUS has put together some interesting tablet designs incorporating Haswell, most notably its Android-Windows 8 hybrid.
  • Toshiba: Toshiba has several upcoming Haswell-equipped tablets.


Clover Trail+: Clover Trail+ was released in mid-2013, and while its CPU surpassed many of its ARM-based competitors at the time of its release, its graphics capabilities and overall performance weren’t particularly jaw-dropping. Additionally, hardware powered by Clover Trail+ tended to be overpriced.

Bay Trail: Bay Trail uses Intel’s 3D transistor technology, which improves power efficiency. It’s also the first time an x86 quad core CPU has been specifically designed for tablets. The new CPU will likely feature hyperthreading, meaning it will run eight simultaneous threads. Additionally, Bay Trail will start at around $200. Some early benchmarks show it to outperform Qualcomm’s Snapdragon 800.

In a nutshell: Given the extremely good performance of Haswell and the likely low cost and good battery endurance of Bay Trail, Intel is going to be hard to beat in the 3rd quarter of 2013. If you intend on buying a Windows 8, or Android, tablet, look for Haswell for performance and Bay Trail for battery endurance and price.

Exynos Octa by Samsung


Samsung has built up a track record of excellence with its Samsung Galaxy Tab series. Its latest cutting-edge CPU packs a shocking eight cores, split across two separate four-core processors.


Exynos Octa: Samsung’s latest eight core CPU, designed for both tablets and smartphones, uses the big.LITTLE design concept, which pairs a four core processor with another four core, high efficiency processor. On tablets, it only has one rumored application:

  • Nexus 11: The Octa chip is rumored to be included on the upcoming Nexus 11.

In a nutshell: Samsung’s Octa core CPU is a strong contender, comparing well to the latest Snapdragon chips. Unfortunately, at this point we only know that it’s being used in the Galaxy S4, as Samsung hasn’t yet announced it for any tablets. In current benchmarks, it’s roughly comparable to the Snapdragon 600.

Conclusion

For the vast majority of users, the operating system will determine which CPU you choose. For ARM, the early reviews show Tegra 4 possessing an edge over all other similar processors. On the other hand, Qualcomm’s Snapdragon 800 processor will likely provide better wattage consumption than the Tegra 4.

For x86, many of the early Haswell-powered tablets and convertible devices show strong performance and good battery consumption – at a high cost. Virtually all of the early Haswell products are priced well above $500. On the other hand, many of the Bay Trail estimates show a price range starting at around $200. AMD, in comparison, competes on price and graphics, where it handily beats Bay Trail and potentially Haswell.

For those of you obsessed with measuring GPU and CPU powers, check out Matt’s awesome review of GPU Boss. The site does a great deal of match-up analysis between upcoming GPUs.

Image credits: Woman using digital tablet PC in the park via Shutterstock; CPU images from their respective manufacturers’ websites.

4 Ways Drones Will Actually Benefit Your Day-to-day Life

Though the concept of a drone vehicle has been around for several decades — mainly in the context of military and war — it wasn’t until 2014 that consumer interest in drones really exploded. Now, people are coming up with all kinds of practical uses that are set to change our everyday lives.

Drone applications can often feel too theoretical and “out there”. We know that drones have many incredible uses for the future and that they will soon revolutionize entire industries, but what about for you and me? What kind of tangible benefits can we look forward to?

Several, in fact. We may still be in the infant stage of consumer drone technology, but it won’t be long before we reach that point when we’re wondering how we ever lived without drones in our lives.

1. Receive Package Deliveries by Air

For those who weren’t aware, you should know that Amazon has long been working on a drone-based delivery system for its packages. The ultimate goal of this system, which is called Amazon Prime Air, is to fly individual packages out to customers within 30 minutes of ordering.

If they manage to make this a reality, it would end up being one of the best reasons to subscribe to Amazon Prime — on top of all the other amazing benefits you can already take advantage of. The good news is that Amazon recently announced that its drones can carry a five-pound package over 10 miles in under 30 minutes.

In other words, this concept of drone-based deliveries is more than just a pipe dream. And it’s more than just a publicity stunt. Amazon is truly working towards making it happen, and the online retailer has made a lot of real progress in just a few years. There are still a handful of challenges and obstacles to overcome, but we’re staying optimistic.

What’s really great about Amazon’s initiative is that it will pave the way for other services to start using drones for personal on-demand deliveries. Imagine new local startups that could deliver pizzas, beer, or even medicine to you in the blink of an eye.

That last point is an interesting one. Consider, for example, if you had a severe allergic reaction. Instead of driving through traffic or waiting for an ambulance, maybe a special healthcare service could fly an EpiPen to your location within minutes.

2. Improve Surveillance and Security

Camera plus drone equals infinite possibilities. It was one of the very first ideas that people had when consumer drones exploded in popularity, and people are still coming up with novel applications for aerial photography to this day.

Some of the results are beyond spectacular — so much so that you have to see them to believe them. Drone cameras have been able to capture footage from angles and distances that were pretty much impossible until now. We’re on the verge of a new era in photography.

But even if you aren’t creative, drone cameras have other uses that you may find indispensable.

Think about photographic evidence. Dashcams were once considered unnecessary, but there are many stories of how those same dashcams have come in handy for all kinds of drivers. With the right kind of dashcam, you could be protecting yourself.

Not that you should use a drone as a dashcam replacement, but just as dashcam footage can help resolve legal issues, so can footage from drones. In a bad neighborhood, for example, you could use a drone to record a shootout or a break-in without putting yourself in harm’s way.

Municipally speaking, drones could lead to safer streets through more dynamic public surveillance. Obviously this would raise concerns over personal privacy, but if done properly, it would likely put a noticeable dent in rates of violent crime.

On a more personal level, you could employ a handful of small drones to keep watch over your own property. Technology already exists that lets you set patrol paths for drones using GPS-based waypoints in 3D. This could be a possible next step for smart home security.

3. Help Decide Where You Want to Live

It’s no secret that many organizations are trying to use drones to better understand the atmosphere and the environments in which we live. In 2014, China vowed to fight pollution with drones, and in 2015, drones were used to measure pollution in the Peruvian Andes.

On a more local scale, however, you may soon be able to gather data about air quality and conditions using your own personal drone. Send it up into the air and grab readings on pollen count, pollution levels, and even more mundane things like temperature and humidity.

In short, drones could help monitor the air quality of a potential new residence, thus informing you on whether you should go through with the move (or whether you should leave your current residence).

Speaking of residences, drone cameras — which we mentioned earlier — can also help you make the most informed decisions when choosing where to live. I know first-hand how frustrating it is to hunt for a new apartment, mainly because the photos are rarely true-to-life.

Now imagine if real estate ads included a fly-through of the entire place using a miniature drone. You’d get to see exactly what the apartment looked like, and you wouldn’t be duped by manipulated photos that can make a place appear very different than it actually is.

4. Better Internet and Civil Infrastructure

In 2015, Facebook announced that it had successfully created a solar-powered drone that could theoretically stay between 60,000 and 90,000 feet in the sky for up to three months at a time. Each drone would have a ground reach radius of about 50 miles.

With thousands of these drones working together, we’d end up with a network that could potentially provide Internet access to the entire world — and this could even be the first step that eventually enables us to end the stranglehold that companies like Comcast have on America’s Internet.

But global drone-provided Internet is still many years off into the future, so what kind of infrastructural benefits can we reasonably expect in the next few years?

A major one will be improved maintenance and upkeep for roads and bridges. Think about it: when a road needs to be examined for safety and integrity, an actual person needs to drive out there in person. Sometimes, the examination even requires the shutting down of lanes or roads. It’s not as efficient as it could be.

Bridges are even worse. In 2013, at least one out of every 10 bridges in the United States needed serious repairs or replacements, and even more bridges were structurally deficient. With drones, these problems could be detected faster at less cost and improve safety for everyone.

Are You Going to Get a Drone?

At this time, drones still fall somewhere between “luxury gimmick” and “recreational fun only,” but the tipping point is just around the corner, and when it arrives, you’ll want to be ready. That’s why you should start preparing now.

Thinking of buying your first drone? Here are a few important questions to ask yourself so you don’t make any mistakes or end up wasting money unnecessarily. Also, you should be aware of these common drone-related security issues just in case.

What do you think of drones? Are you excited or unimpressed? Will you dive into it immediately or wait a few years and see how it pans out first? Tell us your thoughts down in the comments below!

A Quick & Dirty Guide To Ram: What You Need To Know

RAM is the short-term memory of your computer. It’s where your computer keeps track of the programs and data you’re using right now. You probably already know that more RAM is better, so maybe you’re looking to install more RAM now.

Shopping for RAM can be super confusing, though. What’s the difference between DDR3 and DDR4? DIMM and SO-DIMM? What’s the difference between DDR3-1600 and PC3-12800? Are latency and timing important?

In this article, we explain the different kinds of RAM and what distinguishes them.

A Crash Course on RAM

RAM acts as a middle ground between the small, super-fast cache in your CPU and the large, super-slow storage of your hard drive or SSD. It’s used to temporarily store working parts of the operating system and data being actively used by applications. It’s not used for permanent storage.

If you think of your computer like an office, the hard drive is the filing cabinet in the corner, the RAM is like an entire office workstation, and the CPU cache is like the actual working area where you actively work on a document.


The more RAM you have, the more things you can have quick access to at any one time, just like having a bigger desk which can have more bits of paper on it without becoming messy and unwieldy (and requiring more trips back to the filing cabinet to reorganize).

Unlike an office desk, however, RAM can’t act as permanent storage. Its contents are lost as soon as power is lost, such as when you turn your computer off. Losing power is like wiping the workstation clean.

RAM Usually Means SDRAM

Synchronous Dynamic RAM (SDRAM) is typically what people mean when they’re talking about RAM for computers, and it’s what we mean when we discuss RAM in the rest of this article. For most desktops and laptops, it appears as sticks that can be inserted into the motherboard.

Unfortunately, there’s a rising trend for super thin and light laptops to have the RAM soldered directly to the motherboard in the interest of saving space, but this sacrifices upgradability and repairability.

SDRAM is not to be confused with SRAM, which stands for Static RAM (not that kind of static). This is the memory used for CPU caches, among other things. It’s much faster but also limited in its capacity, making it unsuitable as a replacement for SDRAM.

Thankfully, you likely won’t come across SRAM in general usage, so it’s not something you need to worry about.

The Form Factors of RAM

For the most part, RAM comes in two sizes: DIMM (Dual In Line Memory Module), which is found in desktops and servers, and SO-DIMM (Small Outline DIMM), which you’ll see in laptops and small form factor computers.


Though the two form factors use the same technology and functionally work in exactly the same way, they obviously can’t be mixed. You can’t just jam a DIMM stick into a SO-DIMM slot, and a SO-DIMM stick won’t work in a DIMM slot (the pins simply don’t line up).

So when you’re buying RAM for a computer, the very first thing to know is its form factor. Nothing else matters if the stick won’t fit!

The Meaning of DDR

The RAM you use in your computer operates using Double Data Rate (DDR), which means that two transfers happen per clock cycle. Newer types of RAM are updated versions of the same technology, hence why RAM modules are labeled as DDR, DDR2, DDR3 and so on.

While all RAM generations are exactly the same size and shape, they still aren’t compatible. You can’t use DDR3 RAM in a motherboard that only supports DDR2, or vice versa.

Thankfully, each generation has a notch cut in the pins at different locations so they physically won’t fit. Even if you accidentally buy the wrong kind, you won’t have to worry about accidentally damaging your RAM or your motherboard when inserting the sticks.


DDR2 is the oldest kind of RAM that you’re likely to come across today. It has 240 pins (200 for SO-DIMM). It’s been well and truly superseded, but you can still buy it in limited quantities to upgrade older machines when necessary. Otherwise, it’s obsolete.

DDR3 was released all the way back in 2007 and is the current standard, but is currently being superseded by DDR4. While DDR3 DIMMs have the same number of pins as DDR2 (DDR3 SO-DIMMs have 204 pins vs. DDR2’s 200 pins), they run at lower voltages and higher timings so aren’t compatible.

DDR4 is the newest kid on the block and is just starting to pick up mass market adoption despite being first released back in 2011. It drops the voltage even further from 1.5V to 1.2V while increasing the pin count to 288 (260 for SO-DIMM). When buying a new motherboard, get one that’s DDR4 compatible.

Understanding RAM Jargon

So you’ve wrapped your head around SDRAM, DIMMs, and DDR generations, but what about the other long strings of numbers? What do they mean? And what about ECC and Swap? Here are other terms that you may need to know.

Clock Speed, Transfers, Bandwidth

You may have seen RAM referred to by two sets of numbers, like DDR3-1600 and PC3-12800. Both encode the generation of the RAM and its transfer speed. The number after DDR/PC and before the hyphen refers to the generation: DDR2 is PC2, DDR3 is PC3, DDR4 is PC4.

The number paired after DDR refers to the number of megatransfers (MT) per second. (For example, DDR3-1600 RAM operates at 1600 MT/s). The number paired after PC refers to the theoretical bandwidth in megabytes per second. (For example, PC3-12800 operates at 12800 MB/s).
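The two naming schemes line up with simple arithmetic, sketched below in Python (a back-of-the-envelope illustration, assuming the standard 64-bit, i.e. 8-byte, DIMM data bus):

```python
def peak_bandwidth_mb_s(megatransfers_per_s: int, bus_width_bits: int = 64) -> int:
    # Each transfer moves one full bus width of data: 64 bits = 8 bytes,
    # so bandwidth in MB/s = MT/s * 8.
    return megatransfers_per_s * bus_width_bits // 8

# DDR3-1600 and PC3-12800 are the same module under two names.
assert peak_bandwidth_mb_s(1600) == 12800

# The 1600 MT/s figure itself comes from Double Data Rate:
# an 800 MHz I/O clock performs two transfers per cycle.
io_clock_mhz = 800
assert io_clock_mhz * 2 == 1600
```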

It’s possible to overclock RAM, just like you can overclock a CPU or graphics card, to increase the RAM’s bandwidth. Manufacturers sometimes sell RAM that’s already overclocked, but you can overclock it yourself. Just make sure that your motherboard supports the higher clock speed!

Pro Tip: You can mix and match RAM modules of different clock speeds, but they’ll all run at the clock speed of the slowest module. If you want to make use of that faster RAM, don’t mix it with just any old RAM lying around.

Timing and Latency

You’ll sometimes see RAM modules with a series of numbers like 9-10-9-27, which are referred to as timings. These measure the RAM’s latency in clock cycles: the lower the numbers, the quicker the RAM can react to requests.

The first number (9, in the example) is the CAS latency, or the number of clock cycles it takes for data requested by the memory controller to become available to a data pin.

You’ll notice that DDR3 RAM generally has higher timing numbers than DDR2, and DDR4 RAM generally has higher timing numbers than DDR3 — yet DDR4 is faster than DDR3 which is faster than DDR2. Weird, right?


We can explain this using DDR3 and DDR4 as examples. Entry-level DDR3 RAM runs at a 533 MHz clock, which means a clock cycle of 1/533,000,000 seconds, or 1.87 ns. With a CAS latency of 7 cycles, total latency is 1.87 x 7 = 13.09 ns.

The lowest speed DDR4 RAM, meanwhile, runs at an 800 MHz clock, which means a clock cycle of 1/800,000,000 seconds, or 1.25 ns. Even with a higher CAS latency of 9 cycles, total latency is 1.25 x 9 = 11.25 ns. That’s why it’s faster!
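The arithmetic above can be written out as a short Python helper (illustrative only; the clock rates and CAS values are the ones from the example):

```python
def total_latency_ns(clock_hz: float, cas_cycles: int) -> float:
    # Latency in nanoseconds = clock period (in ns) * number of CAS cycles.
    clock_period_ns = 1e9 / clock_hz
    return clock_period_ns * cas_cycles

ddr3 = total_latency_ns(533_000_000, 7)   # ~13.1 ns
ddr4 = total_latency_ns(800_000_000, 9)   # 11.25 ns

print(f"DDR3 at 533 MHz, CL7: {ddr3:.2f} ns")
print(f"DDR4 at 800 MHz, CL9: {ddr4:.2f} ns")
```

(The 13.09 ns in the text comes from rounding the clock period to 1.87 ns before multiplying; the unrounded figure is about 13.13 ns.)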

But for most people, capacity trumps clock speed and latency every time. You’ll get much more benefit from 16 GB of DDR4-1600 RAM than you’ll get from 8 GB of DDR4-2400 RAM. In most cases, timing and latency should be the last points of consideration.

ECC

Error Correcting Code (ECC) RAM is a special kind that aims to detect and correct data corruption. It’s used in servers where errors in mission critical data could be disastrous, such as personal or financial information stored in RAM while being manipulated in a database.

ECC-compatible RAM isn’t supported by the vast majority of consumer motherboards and processors, so unless you’re planning to build a server that specifically requires ECC RAM, you should stay away from it.
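Real ECC memory does this in dedicated hardware with extra memory chips, but the core idea of single-bit error correction can be illustrated with a toy Hamming(7,4) code in Python (purely illustrative; this is not how a DRAM controller is actually implemented):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    # Protect 4 data bits with 3 parity bits (Hamming(7,4)).
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    # Detect and fix a single flipped bit, then return the 4 data bits.
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + (s2 << 1) + (s3 << 2)  # 1-based position of the bad bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip it back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                      # simulate a cosmic-ray bit flip
assert hamming74_correct(code) == word
```

The three parity bits jointly pinpoint the position of any single flipped bit among the seven, so the error can be flipped back before the data is used.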

How Much RAM Do You Need?

Long past are the days where “640K ought to be enough for anybody”. In a world where even phones are shipping with 4 GB of RAM and browsers often play fast and loose with their memory allocations, RAM frugality is a thing of the past.

I would suggest that 4 GB be the absolute bare minimum amount of RAM for a general usage computer, whether it’s running Windows 7 or later, OS X 10.7 or later, or most Linux distributions.


If you find yourself with no less than 6 Word documents open at any one time, can’t bring yourself to close those 60 tabs in Google Chrome, or find yourself needing to run a virtual machine every now and again, you’ll probably want to have at least 8 GB of RAM.

16 GB of RAM should exceed the needs of most, but if you’re the kind who needs those 20 utilities running in the background on top of a mountain of browser tabs and everything else, you’ll be glad for the extra space. Very few people need 32 GB of RAM, but as they say, more is more.

RAM, Now Demystified

Now that you know the difference between DDR2 and DDR3, between DIMM and SO-DIMM, and between codes for transfer speeds and bandwidth, shopping for RAM shouldn’t be quite so daunting.

Really, as long as you’ve got the right generation and the right form factor, you can’t go wrong. Timing and latency do play a role, but capacity is king. And when in doubt, more RAM is better than faster RAM.

How much RAM do you have in your computer? What generation of RAM do you have? Have you tried overclocking at all? Let us know in the comments below!

Image Credits: Businessman front by Mr.Exen via Shutterstock, Jacek Halicki, Dsimic, Tobias B Köhler via Wikimedia, Andrey_Popov via Shutterstock, Crucial

Viruses, Spyware, Malware, Etc. Explained: Understanding Online Threats

When you start to think about all the things that could go wrong when browsing the Internet, the web starts to look like a pretty scary place. Luckily, Internet users as a whole are getting far more savvy, and better at recognizing risky online behavior.

While pages with a dozen download buttons – or auto-checked boxes that tricked us into downloading things we didn’t want – are no longer quite as effective as they once were, that doesn’t mean there aren’t hackers out there right now trying to come up with new methods of deception. In order to protect ourselves from these threats it’s important to understand just what they are, and how they differ.

Let’s dive in.

Understanding Online Security Threats and How They Differ

Malware

may-harm-computer-warning

Malware is short for malicious software. This means that while most of us refer to these threats as viruses, the correct catch-all term should indeed be malware. Malicious software comes in many forms, but malware itself is a general term that could be used to describe any number of things, such as viruses, worms, trojans, spyware, and others. In short, it’s a program or file with bad intentions, the nature of which could encompass just about anything.

Luckily, malware is exactly what all of the most popular antivirus programs are looking for. Getting infected by malware happens, and it doesn’t have to be catastrophic. Learn the right protocol for dealing with malware, and how to avoid it in the first place, for the safest browsing experience.

Viruses

virus-abstract

Viruses consist of malicious code that infects a device after you install software. Typically this infection happens through USB drives, Internet downloads, or email attachments, but it can happen in numerous other ways as well. It’s important to note that infection doesn’t actually occur just from having the infected files on your computer. The infection happens once the program runs for the first time, whether through Autorun, a manual install, or an executable file that the user opens.

Once opened – or run – the infection happens. From that point, it can be very difficult to find and rid yourself of the virus due to the nature in which it works. While actual details are virus-specific, they tend to replicate themselves and infect the file system of the device they reside in by spreading from file to file before they are inevitably – and usually unknowingly – passed on to another machine.

Unlike other threats, many viruses have no purpose other than attempting to render your computer inoperable. Some of them have been particularly good at it; most others are quite weak and easy to detect.

Oh, and it should be pointed out – contrary to popular opinion – that Macs aren’t immune to viruses.

Adware

pop-up-ad-illustration

While relatively benign in most cases, adware might be the most annoying of the threats we’ll talk about today.

Adware is bundled with otherwise legitimate apps or software, which makes initial detection somewhat difficult. A common example is the checkbox at the bottom of a download link (often pre-checked) that asks if we want to “Include X for free” – well, “X” is often the program containing the adware. This isn’t a hard and fast rule, but it’s not uncommon. If you aren’t sure what these additional programs are, or how they function, don’t download them.

Adware infections are also possible through no fault of our own. Recent stories detail at least one major manufacturer including adware – or an adware-like browser hijack – in their computers by default. While Lenovo and Superfish are the exception rather than the rule, it’s important to note that these threats happen, and oftentimes there isn’t much we can do about it.

Trojans and Backdoors

hacker-access-granted

Trojans were named after the Trojan Horse, which was a giant wooden horse used to conceal Greek soldiers as they entered Troy during the Trojan War. History lesson aside, this is the same way that a trojan damages your computer. It hides malicious code inside a seemingly innocuous program or file in order to gain access to your machine. Once inside, the program installs itself on your device, and communicates with a server in the background without your knowledge. This gives an outside party access to your computer through what’s commonly referred to as a backdoor.

While giving an outside party access to your computer is scary in and of itself, the implications of what they could be doing with this access are even scarier. What complicates matters is the small footprint that these backdoors leave, which keeps the user completely in the dark that any privacy breach is even occurring.

One saving grace is the way backdoors operate. Since the hacker must connect to your machine remotely, they won’t be able to do so if you disable the Internet connection while you attempt to locate and remove the malicious code.

Spyware

toolbar-spyware

Spyware is the most common piece of badware on the Internet. While it’s quite deceptive in nature and a major annoyance, most spyware is relatively harmless. Typically, spyware is used to monitor browsing behavior in order to better serve relevant ads. What makes it bad is how these companies go about collecting your data. Rather than relying on tracking pixels or cookies like most major companies, spyware acts more like a trojan: you install it, and it communicates data from your computer back to a server, all while most of us are completely oblivious to its presence in the first place.

Other, more malicious forms of spyware are far more dangerous. While typical spyware is mostly used for ad-serving purposes, malicious spyware communicates sensitive data back to another user or a server. This data can include emails, photos, log files, credit card numbers, banking information, and/or online passwords.

Spyware is most often downloaded by the user as part of an add-on to a legitimate download (such as a toolbar) or included as part of a freeware or shareware program.

Scareware and Ransomware

scareware-winpc-defender

Scareware and ransomware differ in their approach, but the end goal for both is to collect money by manipulating the user into believing something that’s often untrue.

Scareware most often takes the form of programs that pop up and tell you that your computer is infected with some sort of malware. When you click to remove the (often) multiple instances of malware, you are forced to pay to purchase the full version before the program can clean your system and rid it of the infections or threats.

Ransomware operates a bit differently: after the malicious software is installed, it will often lock down your system, leaving only a window that lets you pay the ransom in order to regain use of it. While ransomware is generally among the easiest threats to remove, it can be quite scary for a non-savvy computer user. As such, many believe that they must give in and pay the ransom in order to regain control of the machine.

Worms

dual-monitor-crash

Worms are by far the most damaging form of malware. While a virus attacks one computer and relies on a user to share infected files in order for it to spread, a worm exploits security loopholes in a network and can potentially bring the whole thing to its knees in a matter of minutes.

Networks with security vulnerabilities are targeted by introducing the worm into the network and allowing it to pass (often unnoticed) from computer to computer. As it passes from one device to another, the infection spreads until each machine is infected – or – the worm is isolated by removing the infected machines from the network.

Unnamed Exploits, Security Flaws and Vulnerabilities

No matter how competent the developer, every program has security flaws and vulnerabilities. These security flaws allow hackers to exploit them in order to gain access to the program, alter it in some way, or inject their own code (often malware) within it.

If you were ever wondering why programs had so many security updates, it’s because of the constant cat and mouse being played between developers and hackers. The developer attempts to find, and patch, these holes before they’re exploited, while the hacker attempts to exploit security flaws before they’re discovered and patched by a developer.

The only way to stay even remotely safe from these exploits is to keep your operating system and each of your programs up-to-date by installing updates as they become available.

Staying Safe Online

computer-keyboard

If you’re using the web, there’s no foolproof method to avoid all online threats, but there are certainly things you can do to make yourself safer.

Some of these are:

  • Keep your operating system and each of your programs up-to-date by downloading updates as they become available.
  • Install a good antivirus program and keep the virus definitions up-to-date.
  • Utilize a firewall that monitors both inbound and outbound traffic. Keep an eye on the flow of this traffic to help to detect the presence of threats that may be communicating with outside servers.
  • Avoid unsafe downloads from unknown and untrusted sources.
  • Use your antivirus program, or a malware detection program to scan suspicious links before opening them.
  • Avoid pirated software.

Again, if you spend any portion of your time on the web, it’s unlikely that you can completely protect yourself from all the badware out there. While infections and exploits can – and do – happen to anyone, I don’t think any of us would deny that we could stay a little safer with subtle changes in our browsing or computer use habits.

What are you doing to keep yourself safe from threats and exploits online? Are there any specific programs or apps that you use for online security? Please help keep the rest of us safer online by sharing any tips you have in the comments below!

Photo credit: Computer Virus via Shutterstock, Warning! by Paul Downey via Flickr, Virus by Yuri Samoilov via Flickr, Annoying pop up via Shutterstock, Hackers – Seguridad by TecnoDroidVe via Flickr, Toolbars by mdornseif via Flickr, Malware by mdaniels7 via Flickr, Dual Crash by Dr. Gianluigi “Zane” Zanet via Flickr, Caps Lock by DeclanTM via Flickr

Decoding Intel’s Laptop Processor List [technology Explained]

The modern computer processor has always been a complex piece of technology, and that shows no signs of changing. Such complexity brings a challenge to companies such as Intel. Making great products is one thing, making them easy to understand is another.

Intel certainly has made an effort by attaching a series of numbers and letters to each of its specific products, such as the Core i7-2630QM. These all mean something – but what? Unfortunately, that’s not well explained.

Basics – The Brands

intel processors

First, before we go into the numbers and letters affixed to each processor, let’s review the brands.

Mainstream Intel processors are currently branded with the Core name, which is then supplemented by the i3, i5 or i7 brand. Higher is better. The Core i3 processors are the entry level, the i5 is mid-range, and i7 consists of high-end products including quad-cores. The main differences between them center on the Turbo Boost feature. Core i3 processors do not have it, while Core i5 and i7 processors do.

There are other brands, however. These include Pentium, a budget brand of scaled-down processors based on the same technology as Intel Core processors, and Celeron, a brand of extremely inexpensive processors with low clock speeds meant for ultraportable and budget laptops.

Only the Core processors share a common naming nomenclature, and they are the most common, so they are what we’ll address from here on.

I’ve Got Your Number

intel processors

All of the Core processors have a naming system that works like this:

Core [brand] + [processor number] + [suffix]

The Core i7-2630QM, for example, has the processor number 2630. Packed into this is more information. The first digit represents the generation of the processor; the current Intel Cores are the second iteration since the new branding went into effect. The three numbers thereafter simply tell you where Intel thinks the processor places in terms of performance relative to its other products. The higher, the better.

Intel didn’t provide the first generation with a number representing it, so the first generation Core processors are represented by just three numbers. The Intel Core i3-330M, for example, is a first-generation Core processor relatively low in that generation’s lineup.

Paying attention to the processor number is a simple way to gauge performance, all other things being equal. If you’re examining two laptops, one with a Core i5-2410M and another with a Core i5-2540M, you already know the second is quicker without ever looking at the specifications.

However, Intel added a caveat to this rule by including odd-number processors like the Intel Core i3-2357M. This processor is actually a low-voltage processor, which is to say it has a lower clock speed and lower TDP than normal mobile processors, resulting in worse performance but better battery life.
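The naming scheme above is regular enough to pick apart mechanically. Here is a hypothetical sketch – the function name and the returned fields are my own, and it only covers the mobile Core parts described in this article:

```python
import re

def parse_core_name(name):
    """Split 'Core [brand]-[number][suffix]' into its parts."""
    m = re.match(r"Core (i[357])-(\d{3,4})([A-Z]*)", name)
    if not m:
        return None
    brand, number, suffix = m.groups()
    # Four digits: the first digit is the generation.
    # Three digits: a first-generation part (no generation digit).
    generation = int(number[0]) if len(number) == 4 else 1
    return {"brand": brand, "generation": generation,
            "sku": number, "suffix": suffix}

print(parse_core_name("Core i7-2630QM"))
# {'brand': 'i7', 'generation': 2, 'sku': '2630', 'suffix': 'QM'}
print(parse_core_name("Core i3-330M"))
# {'brand': 'i3', 'generation': 1, 'sku': '330', 'suffix': 'M'}
```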

The Suffix – A Very Important Detail

intel processors

Although Intel attaches numbers to processors in order to align them in the company’s product line, not all products are easily compared. Quad-core processors are obviously going to have an advantage over dual-core options, and some are built with low power consumption as a goal. To communicate these differences, Intel adds letters to the end of their processors. All of the laptop processors have an M attached to show they are mobile processors, but there are more to be aware of.

One of the most important is Q, which represents a quad-core processor. Most of Intel’s Core i7 products are quads, which leads consumers to think they all are. That’s not true! All modern Intel mobile quads have the Q suffix. An exception is the Extreme Edition processor, which replaces the Q with an X. There’s only one second-generation Extreme Edition processor available at this juncture, however.

The E suffix is one you’ll see on a few products, but as a consumer you don’t particularly need to worry about it. The letter stands for embedded, which means the processor can be utilized in embedded systems.

Finally, you should be aware of the U suffix. In the first generation of Intel Core processors this was used to designate a low-voltage product. This was dropped with the second generation in favor of an odd processor number, as was explained in the previous section.

Conclusion

When looking at an Intel powered laptop and judging the processor, do the following:

  • Check the brand. Is it Core i3, i5 or i7?
  • Look at the processor number, paying attention to the first numeral. Make sure the processor is of the latest generation.
  • Examine any suffix that might be attached.

These three bits of information will give you most of what you need to know about a mobile Intel processor. Once you understand how Intel’s laptop processor list is organized, making at-a-glance judgments isn’t difficult. Now let’s just hope Intel keeps this branding, rather than switching to some other scheme!

If you’re looking for more general information about laptops, be sure to check out our laptop buying guide and if you have any questions, leave them below in the comments, or in our helpful tech community.

Here’s What The Arrival Of 5g Means For You And Me

You might have seen ads for new “5G Evolution” networks from AT&T, and the other carriers are bound to follow suit.

News flash! That’s not 5G and you’re falling for an advertising gimmick. But 5G is coming soon, nonetheless.

Any Carrier Offering 5G Is Lying to You

Some carriers have jumped the gun, claiming they already offer 5G. This is a blatant lie. It’s actually only a faster and better version of current 4G LTE technologies. The advertising lies got so bad that the U.N. had to step in.

For every “G”, the mobile world (manufacturers, carriers, tower operators, etc.) has to agree on a set of specifications. The agency that defines these standards is the International Telecommunication Union (ITU), a United Nations body. And it finally came out with a set of specifications recently that proves the “5G” claims of carriers are bogus.

AT&T’s “5G Evolution” does not meet these specifications, and neither will Verizon’s upcoming claim of delivering 5G. They are using 4G technologies like 4×4 MIMO (what is MIMO?) to boost data speeds, but they don’t match 5G. And while data speeds are important, 5G isn’t just about data speeds. There’s so much more to it.

What Is 5G?

Right now, 5G isn’t defined. No one knows the final specifications yet. But the ITU has so far agreed on a few key requirements for 5G performance.

5g logo

  • No Call Drops — This is the biggest change for the regular mobile user. You will not drop calls or lose internet connectivity when you’re switching between towers on a 5G network. The ITU says a 5G network is one where such mobile interruption time does not exist, so unless that specification is met, it’s not 5G.
  • Low Latency — 5G phones will have latency between 4ms and 1ms (the lower the better). Latency, measured in milliseconds, is the amount of time that your phone takes to send a signal to an internet server. The fastest for 4G is a latency of 50ms. Low latency will drastically improve experiences like augmented reality, or virtual reality with smart glasses.
  • Battery Efficient — You’ve heard about how data connectivity is draining your phone’s battery. The 5G networks will significantly reduce that with better “sleep” features. “Sleep” is when the phone isn’t using the network.
  • Works at 500 km/h — The faster your vehicle is moving, the faster your phone is moving. This means your phone is changing mobile towers at a rapid rate. Current mobile networks can’t handle these rapid handovers between towers. 5G will work even when you are in a high-speed train travelling at 500 km/h.

How Fast Is 5G?

5G networks will have a real-world download rate of 100 Mb/s (megabits per second) and upload rate of 50 Mb/s. That’s about 12.5 MB of data downloaded per second. An hour-long Netflix show in HD will download in roughly four minutes.
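The back-of-envelope arithmetic: divide megabits by 8 to get megabytes, then divide the file size by that rate. Assuming an hour of HD video is roughly 3 GB (actual sizes vary with bitrate):

```python
download_mbps = 100                # real-world 5G downlink, in megabits/s
mb_per_second = download_mbps / 8  # 12.5 megabytes per second
show_size_mb = 3000                # assumed ~3 GB HD episode
minutes = show_size_mb / mb_per_second / 60
print(minutes)  # 4.0
```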

3g 4g 5g speeds

The ITU distinguishes between “peak data rates” (technically highest possible in lab settings) and “experienced data rates” (what users will get in real-world conditions).

5G will have peak data rates of 20 Gb/s (gigabits per second) downlink and 10 Gb/s uplink. In lab tests so far, Nokia and Samsung have managed downlink speeds of 10 Gb/s and 7.5 Gb/s, respectively.

The bottom line: it’s really, really fast!

When Will 5G Phones and Networks Be Available?

The estimate for regular consumers to get 5G phones is 2020. The U.S. will see the first lot of 5G-compliant devices in 2019, according to Intel and Ericsson executives. But the ITU’s timeline puts 2018–2020 as a period of “defining the technology,” so there might be some changes left.

itu imt2020

That 2020 estimate is partly based on the mobile world’s 10-year cycle of launching the next generation of network. It started with analog 1G in 1982, moved to digital 2G in 1991, boosted to 3G in the 2000s, and landed on the current 4G networks in the 2010s.

South Korean and Japanese carriers have announced they will launch commercial 5G services in 2019. Major European operators target launching 5G in at least one city in each of the EU Member States by 2020.

What Frequency Bands Will 5G Use?

5G networks are expected to run in the 28 GHz, 37 GHz, and 39 GHz bands in the U.S., according to the FCC. Generally speaking, 5G will start at 30 GHz so that it can take advantage of millimeter waves. For comparison, 4G works between 700 MHz and 2100 MHz. A gigahertz is literally a thousand times higher in frequency than a megahertz.

high spectrum band

5G is the first network to work on millimeter waves, which will allow for much higher data transfers than currently possible. Millimeter waves will also reduce the size of antennae needed. This means wearable technology like smartwatches or smart glasses will be able to fit 5G connectivity more easily.

What Are the Differences Between 4G and 5G?

4g vs 5g differences

What Are the Advantages of 5G?

For the end consumer, 5G will have tangible benefits like:

  • Faster download and upload speeds.
  • Better video calling, especially while moving.
  • Better battery life on phones.
  • Fantastic augmented reality, since 5G tracks users in real time and loads data faster.
  • Virtual reality mobile experiences, due to speeds that can support real-time 4K streaming on mobiles.
  • Improvements in wearable technology, like fitness bands that can track your every move.
  • More IoT (Internet of Things) and smart devices, since 5G antennae are smaller and more battery-efficient than current technology.
  • Better driverless cars, since latency is dramatically reduced.

There is a lot more to 5G, of course, including indirect benefits. For example, it brings long-term cost benefits to the mobile industry, which in turn, will pass on those benefits to the consumer.

Carriers seem to be marketing their advanced LTE speeds as “5G” purely based on better download speeds. Are faster download speeds the most important thing for you? Do you think it’s okay for them to call it 5G based on that? Let us know your thoughts below!

Touch Something That Isn’t There – Haptic Technology [makeuseof Explains]

Haptics is the technology of touch. In the context of a virtual environment, it means being able to touch and feel something that literally isn’t there – but that’s certainly not its only use. Virtual surgery, driving simulators in which you can actually feel the surface and conditions of the road, swinging a virtual sword and feeling the motion as it smashes against the enemy’s armor, the texture of paper or moleskin on a touchscreen display – from gaming and virtual reality to 3D modelling and making computers more accessible, this is haptic technology at its most amazing, and the implications are far reaching.

The Past & Present

My first experience of haptic technology was at a consumer technology show about 10 years ago. It was pretty new then, and sadly things haven’t progressed an awful lot since – but they have got a lot cheaper, and we’re on the cusp of a great revolution, so it’s about time you knew what’s coming. The toy I was privileged to have a play with was a pen for 3D modelling. The pen floated in 3D space, attached to a base unit with a single movable arm. By simply moving the pen around, you could move an on-screen 3D modelling or sculpting tool.

But the amazing part of this pen was that when your tool hit the 3D object, it would stop, right there, preventing you from moving any further in that direction. By moving your tool around the surface of the object, you could literally feel the shape of it. Though immensely fascinating, this didn’t help my modelling skills one bit. Give me a lump of real clay and I’d do just as badly – but in the right hands, it makes 3D modelling a thoroughly more realistic experience.

Here’s a video of a similar device in action, though obviously it’s difficult to portray the haptic feedback in a video.

This idea was then extended to the entire hand, as this video from 2010 demonstrates. With a 3D display, the glove wearer can physically interact with a virtual object using haptic technology, touching what isn’t there.

You think that’s cool? That’s nothing. How about not having to wear a glove at all, yet still being able to feel a holographic object? Yes, exactly like Minority Report or Iron Man. The system uses ultrasonic jets to project the sensations combined with traditional holography.

Haptic technology is also known as “tactile” feedback, but force feedback in gaming controllers is also a form of haptic technology – called “kinesthetic” –  in which the user experiences movement such as the resistance a steering wheel might give when turning. High end joysticks and steering wheel controllers on the market today provide feedback to the gamer by vibrating or resisting motion just as a real car or aeroplane might.

Nearly all modern consoles include at least a basic vibration for the sensation of firing a gun, though the effect isn’t at all realistic (this is probably a good thing though, as the actual pushback from a lot of guns would probably break our untrained arms).

The Wii Remote was the first console pointing device to implement haptic feedback for general user interfaces. The remote would “buzz” and snap to a menu element when you hovered over it. If you’ve used a Wii, there’s a chance you haven’t even noticed it – it’s such a natural enhancement and a great example of haptic technology done right.

Launched this year and still seeking partners for integration, ViviTouch have developed a highly responsive artificial muscle, able to depict vibrations at a far greater level of realism. They’ve branched into two technologies: one for headphones, for more realistic audio, and one for mobile devices, for more immersive gaming.

Here’s a promotional video of the gaming side, which is currently only available for iPod Touch 4th Gen and a small assortment of compatible games.

The Future

Tactus has shown us their vision of a haptic future, one in which smart phone buttons physically emerge from the screen as required, creating a dynamic physical keyboard on a standard touchscreen. It’s an impressive feat, no doubt, but how useful might it actually be?

The niche for these devices might eventually be carved in accessibility features – making touch screens and technology in general more useful for blind consumers. One can easily imagine how such a haptic interface could display braille to read text elements, for instance – external devices currently handle this task, but combining the two would make a lot of sense.

Immersion and Senseg have taken a different approach to emulating tactile feedback. Instead of physically altering a device, they use electrical currents to generate the feel of a surface (Senseg calls these “Tixels”).

At the launch of the iPad 3 last year, rumors abounded that the device would include either Senseg’s haptic feedback tech or a proprietary system Apple had also claimed a patent on (using piezo-electric actuators instead of Senseg’s electrical fields), but neither has come to fruition just yet. It’s likely we’ll see more of these in mobile devices in the immediate future.

Tactical Haptics have most recently demoed a kinesthetic feedback controller built around a Razer Hydra motion peripheral – it’s able to accurately simulate the feel of slicing with a sword or the swinging motion of a flail – so it’s easy to see how it could really make for a more immersive gaming experience, especially when combined with the Oculus Rift VR headset.

As you can see, haptic technology already plays a part in our lives, but it’s another immersive technology that – like virtual reality – has so many as-yet unrealised applications. At least now, you’ll know what it is when it gets here. Like most new technologies, I expect it’ll be used for gaming first!

Have you had a chance to play with some haptic or force-feedback devices, and if so, what are your thoughts on them? Did they make the experience more immersive, or make you better at the task – or was it just a weird distraction, or a gimmick?

What Is Binary? [technology Explained]

Given that binary is so absolutely fundamental to the existence of computers, it seems odd that we’ve never tackled the topic before – so today I thought I’d give a brief overview of what binary actually means and how it’s used in computers. If you’ve always wondered what the difference between 8-bit, 32-bit, and 64-bit really is, and why it matters – then read on!

What is binary? The difference between Base 10 and Base 2

Most of us have grown up in a base 10 world of numbers, by which I mean we have 10 ‘base’ digits (0-9) from which we derive all other numbers. Once we’ve exhausted those, we move up a unit level – 10’s, 100’s, 1000’s – and this form of counting is hammered into our brains from birth. Base 10 hasn’t always been universal, though: some cultures historically counted in base 12, using the knuckles of their fingers.

When we learn base 10 in elementary school, we often write out the units like this:

what is binary code

So the number 1990 actually consists of 1 x 1000, 9 x 100, 9 x 10, and 0 x 1. I’m sure I don’t need to explain base 10 any further than that.
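As a sanity check, that decomposition of 1990 can be written out directly:

```python
# 1990 = 1 x 1000 + 9 x 100 + 9 x 10 + 0 x 1
digits = [1, 9, 9, 0]
places = [1000, 100, 10, 1]
print(sum(d * p for d, p in zip(digits, places)))  # 1990
```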

But what if, instead of having a full selection of 0,1,2,3,4,5,6,7,8,9 to work with as the base digits, we only had 0 and 1? This is called base 2, and it’s also commonly referred to as binary. In a binary world, you can only count 0, 1 – then you need to move to the next unit level.

Counting in Binary

It helps immensely if we write out the units when learning binary. In this case, instead of each additional unit being multiplied by 10, it’s multiplied by 2, giving us 1,2,4,8,16,32,64 … So to help calculate, we can write them out like this:

what is binary

In other words, the right-most digit in a binary number represents how many 1’s. The next digit, to the left of that, represents how many 2’s. The next represents how many 4’s… and so on.

With that knowledge, we can write out a table of counting in binary, with the equivalent base 10 value indicated on the left.

what is binary code

Spend a moment going over that until you can see exactly why 25 is written as 11001. You should be able to break it down as being 16+8+1 = 25.
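If you have Python to hand, you can verify any row of that table – the built-in `int` with a base of 2 does the conversion for you:

```python
# 11001 in base 2 is 16 + 8 + 1 = 25
print(int("11001", 2))  # 25

# Or sum the place values by hand, right to left: 1s, 2s, 4s, 8s, 16s
bits = "11001"
print(sum(int(b) * 2**i for i, b in enumerate(reversed(bits))))  # 25
```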

Working backwards – base 10 to binary

You should now be able to figure out what value a binary number has by drawing a similar table and multiplying each unit. To switch a regular base 10 number to binary takes a little more effort. The first step is to find the largest binary unit that “fits into” the number. So for example, if we were doing 35, then the largest number from that table that fits into 35 is 32, so we would have a 1 there in that column. We then have a remainder of 3 – which would need a 2, and then finally a 1. So we get 100011.
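The “largest unit that fits” method can be sketched as a loop over the place values from the table, largest first:

```python
def to_binary(n, width=8):
    """Greedy base 10 -> base 2 conversion, as described above."""
    digits = ""
    for place in (2**i for i in range(width - 1, -1, -1)):  # 128 down to 1
        if place <= n:
            digits += "1"
            n -= place  # keep the remainder, e.g. 35 - 32 = 3
        else:
            digits += "0"
    return digits

print(to_binary(35))  # 00100011 (i.e. 100011 with leading zeroes)
```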

8-bits, Bytes, and Octets

The table I’ve shown above is 8-bit, because we have a maximum of 8 zeroes and ones to use for our binary number. Thus, the maximum number we can possibly represent is 11111111, or 255. This is why, in order to represent any number from 0-255, we need at least 8 bits. Octet and byte are simply other ways of saying 8 bits; therefore 1 byte = 8 bits.
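A one-line check of that maximum:

```python
# Eight 1s is the largest value 8 bits can hold
print(int("11111111", 2))  # 255
print(2**8 - 1)            # 255, the same thing
```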

32 vs 64-bit Computing

Nowadays you often hear the terms 32-bit and 64-bit versions of Windows, and you may know that 32-bit Windows can only support up to 4 gigabytes of RAM. Why is that though?

It all comes down to memory addressing. Each byte of memory needs a unique address in order to access it. If we had an 8-bit memory addressing system, we would only be able to have a maximum of 256 bytes of memory. With a 32-bit memory addressing system (imagine extending the table above to have 32 binary unit columns), we can go anywhere up to 4,294,967,296 bytes – roughly 4 billion, or in other words, 4 GIGAbytes. 64-bit computing essentially removes this limit by giving us up to 18 quintillion different addresses – a number most of us simply can’t fathom.
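The address-space sizes above all come from the same formula: 2 raised to the number of address bits.

```python
for bits in (8, 32, 64):
    print(f"{bits}-bit: {2**bits:,} addresses")
# 8-bit: 256 addresses
# 32-bit: 4,294,967,296 addresses (about 4 billion, i.e. 4 GB)
# 64-bit: 18,446,744,073,709,551,616 addresses (about 18 quintillion)
```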

IPv4 Addressing

The latest worry in the computing world is all about IP addresses, in particular IPv4 addresses, like these:

  • 192.168.0.1
  • 200.187.54.22

They actually consist of 4 numbers, each representing a value up to 255. Can you guess why? Yep, the whole address is represented by 4 octets (32 bits in total). This seemed like an awful lot of possible addresses (around 4 billion in fact) at the time the internet was first invented, but we’re rapidly running out now that everything in our life needs to be connected. To solve this, the new IPv6 uses 128 bits in total, giving us approximately 340 undecillion (put 38 zeroes on the end) addresses to play with.
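You can see the four-octet structure by packing an address into a single 32-bit number – each octet just shifts into its own 8-bit slot:

```python
def ip_to_int(ip):
    """Pack a dotted-quad IPv4 address into one 32-bit integer."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(ip_to_int("192.168.0.1"))  # 3232235521
print(2**32)                     # 4294967296 possible addresses in total
```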

I’m going to leave it there for today, so I can get back to my original aim which was to write the next Arduino tutorial – in which we make extensive use of a bit-shift register. I hope today has given you a basic understanding of how binary is so significant to computers, why the same numbers keep appearing, and why the number of bits we have to represent something places a finite limit on the amount of memory, screen size, possible color values, or unique IP addresses available to us. Next time, we’ll take a look at binary logic calculations, which is pretty much all a computer processor does, as well as how computers can represent negative numbers.

Comments? Confusion? Did you find my explanation easy to understand? Whatever the case, please get in touch in the comments. I shall leave you with a binary joke!

There are only 10 types of people in the world: those who understand binary, and those who don’t.

Image credit: Shutterstock