Tech

Intel adapts to survive competition from Apple and AMD

Intel announced major changes in strategy, not only offering its own designs to other companies, but also outsourcing manufacturing of some of its own CPUs, as well as manufacturing ARM chips for other companies

Intel CEO Pat Gelsinger

For the last quarter century, Intel has built processors using its own, closely guarded designs and manufactured them in its own facilities. This formula has served it well, and the company has often dominated the market for long periods. But the past five years have seen the chip giant go from practically unchallenged market dominance to being on the back foot, even in the gaming PC market, which Intel has dominated since the Core 2 processors were released in the mid-2000s. So last month, Intel announced major changes in strategy: not only offering its own designs to other companies, but also outsourcing manufacturing of some of its own CPUs and manufacturing ARM chips for other companies.

Companies in this position are often there due to complacency, but Intel was not sitting back; it actually had a solid roadmap in place just before it all started to go wrong. There are two major aspects of a processor, and indeed of other chips such as GPUs (Graphics Processing Units): the architecture, essentially the design and layout of the chip, and the feature size or process technology, which is measured in nanometers (nm) these days and refers to the size of transistors, the fundamental building blocks of any chip. While there have been some issues with the architecture, the main problems Intel has faced have been delays in its transition from 14nm to 10nm.

If you are wondering what all this has to do with anything, it comes down to physics. The smaller the transistors, the more you can pack into a processor of the same size and the less energy is required to run each one, which in turn means more performance for the same energy, often with the side effect of less heat being generated. While Intel has been stuck on a larger feature size, AMD uses TSMC (Taiwan Semiconductor Manufacturing Company), a company powering ahead with smaller feature sizes, to manufacture its processors.
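The scaling argument above can be sketched as rough arithmetic. Note this is a first-order illustration only: modern node names like "14nm" are marketing labels rather than literal transistor dimensions, so the numbers are indicative, not exact.

```python
# Illustrative sketch only: node names like "14nm" are marketing labels,
# not literal transistor dimensions, so treat these as approximations.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Transistor density scales roughly with the inverse square of the
    feature size, because a chip is a two-dimensional surface."""
    return (old_nm / new_nm) ** 2

def switching_energy(capacitance: float, voltage: float) -> float:
    """Dynamic energy per transistor switch, E = C * V^2: smaller
    transistors have lower capacitance and can run at lower voltage,
    so each operation costs less energy and generates less heat."""
    return capacitance * voltage ** 2

# Moving from a 14nm-class process to a 10nm-class one:
print(f"~{density_gain(14, 10):.1f}x transistors in the same area")
```

Under this idealised model, a 14nm-to-10nm shrink gives roughly twice the transistors in the same area, which is why being stuck on an older process is such a competitive handicap.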

AMD, under the leadership of CEO Lisa Su, who has an electrical engineering background, has gone from strength to strength with each new generation of Ryzen processors. Last year’s Ryzen 5000 series not only crushed Intel’s offerings in productivity tasks such as video editing and graphic design, as AMD CPUs have done for the past few years, but also finally matched Intel’s performance in many games and even came out on top in some of them. The high-end gaming PC market had been dominated by Intel CPUs for over a decade, so this marks the end of an era. AMD’s rise, coupled with years of delays to Intel’s 10nm process, has resulted in a perfect storm for Intel.

But Intel’s processor woes don’t end there. In 2019, Apple announced it would be transitioning its Mac computers away from Intel to its own ARM-based processors, having used its own chips in iPhones and iPads since 2010. Apple had used Intel CPUs in its Macs since 2006 and, while Intel tried to downplay the impact of Apple’s decision on its business, it was clearly a huge blow for the company. Apple has always tried to design as many of the components used in its devices as possible, but it most likely decided to switch, probably around 2015, when Intel’s continued delays to its 10nm process started affecting Apple’s ability to release Macs with the capabilities it wanted, at the times it needed them.

Ironically, Apple was only really able to do this because of Intel’s lack of foresight over 15 years ago, when Steve Jobs asked it to make a mobile processor for an Apple phone. Intel’s leadership at the time did not feel it was worth the investment required to design and build a chip for a device they thought would not sell in high volumes. Thanks to that snub, Apple put an ARM-based chip in the iPhone and, in 2010, launched the first iPad and the iPhone 4, both featuring the Apple A4, the company’s first chip designed in-house. Apple-designed chips are now considered among the best in the world, delivering performance that has kept iPhones and iPads at the cutting edge, despite fewer cores and lower on-paper specs than rivals’ offerings.

Thanks to the expertise and success of Apple’s silicon design teams, not only was Apple able to make the switch, but its first Apple Silicon chip for Macs, the M1, delivered huge gains in performance and battery life compared with the Intel offerings in the same category. While that is partly because Apple can focus its designs on the areas of performance its market needs most, TSMC’s superior process is certainly a significant factor.

Intel Wafer: Processors are manufactured on wafers, before being cut into individual chips

Which brings us to last month’s huge announcements, representing a fundamental shift in the principles Intel has operated on for decades. While much of Intel’s success has come from its formula of using its own designs manufactured in-house, the chip giant is making three changes. First, it will continue to build the majority of its own chips, but many will be made using newer technologies (such as extreme ultraviolet lithography, or EUV) that TSMC and Samsung have been using for some years now. Second, Intel will use external suppliers to manufacture some of its core processors, giving the company access to the best manufacturing processes for products that would benefit from them. Finally, the chip giant will become a foundry, manufacturing both x86 chips (the architecture its own processors currently use) and ARM chips for other companies, with the aim of becoming a major supplier of foundry capacity to the industry. That last change is how Intel intends to try to win back business from Apple.

The new strategy, dubbed ‘IDM 2.0’, was announced by Intel CEO Pat Gelsinger, who took over from Bob Swan in February this year, returning to Intel after 12 years away, most recently as CEO of Dell-owned VMware. Gelsinger was immediately candid about the company’s failure to keep up with competitors, as well as the loss of Apple’s business.

These changes won’t happen overnight, and much of what was announced has already been in progress for many months. The semiconductor industry does not move fast, as chip design and the development of new manufacturing process technologies take years. In the meantime, AMD will continue to assert itself in the market, and Apple will complete its transition away from Intel processors next year. Intel still has some tough times ahead, but it is clearly working to come back fighting. One thing is certain, however: the market for the processors that power personal computers, smartphones, tablets and more is as competitive as it has ever been, which should mean better performance, and maybe even better battery life, for our devices in the years ahead.

All views expressed in this editorial are solely that of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Science and Technology Editor at The Analyst.
Has a passion for technology and what makes things tick.


Tech

Is Privacy a Price Worth Paying for Online Communication?

In this episode we discuss data privacy concerns on social media platforms and messaging apps that have become such a prominent part of our lives


Tech

Electronic communication or printing: Are either of them the better alternative?


Print versus electronic communication: a debate that has raged ever since companies started advertising themselves as pro-environment. Banks, telephone providers, utility companies and many more now send bills, promotions and flyers through digital channels, and customers are encouraged to save the environment by going paperless. While going paperless is widely considered best for the environment, it has plenty of disadvantages of its own. Paper use is blamed for harming forests, and the production and use of paper is often cited as the main driver of deforestation. An in-depth analysis, however, contradicts that belief.

Printing is always put in the spotlight for not being ecological, but digital media’s dependence on coal-powered electricity goes unnoticed by many people. As consumers go paperless, their use of electronic media increases, and much of the energy powering electronic media in North America comes from coal-fired plants. Coal companies cut down trees, causing deforestation, and remove mountaintops to reach thin coal seams buried deep in the mountains. As digital media use grows, a significant amount of technology draws its power from coal-fired electricity, contributing greatly to global warming. The adoption of digital media and cloud-based technology is consuming more energy now than ever before. It is estimated that, by the end of this year, “data centers will demand more electricity than is currently demanded by France, Brazil, Canada, and Germany combined”. In 2012, more than $1 billion worth of electricity was used by networking equipment in America alone, equivalent to the output of three large coal-fired power plants. Mountaintop coal mining is becoming a major cause not only of deforestation, but also of the destruction of ecosystems, pollution, and the emission of greenhouse gases that contribute to global warming.

The increased use of digital media also translates into more devices being bought and used by more people. As the electronics industry races to make better and more advanced versions of its products, older technology becomes obsolete, and many consumers do not recycle their electronics. Computers and most other electronics contain both toxic and recyclable materials. Recyclable materials like gold, palladium, platinum, copper and nickel are recovered once a device is sent for recycling, but toxic materials like lead and zinc, which should be handled with care, are left in landfills or shipped to underdeveloped countries. When released into the environment, these toxic materials damage human blood, kidneys and nervous systems. E-waste affects not only humans but also land and sea animals. Moreover, recyclable materials like copper are often extracted by burning the devices, which releases hydrocarbons into the air. Lead poisoning is one of the biggest causes of death in the underdeveloped countries where most e-waste is dumped in landfills, and heavy metals eventually reach groundwater through the soil, contaminating the drinking water consumed by the many locals living near these landfills.

Paper manufacturing has gained a bad reputation over the years as environmental concerns have intensified, but many paper companies are adopting methods that avoid harming the environment. According to the American Forest & Paper Association, more than 65% of paper in the US was recycled in 2012, making paper the most recycled commodity. Companies are using sustainable methods to manufacture paper and recycle the majority of it. Paper manufactured from recycled fibres uses less energy and fewer natural resources such as wood, which in turn reduces environmental pollution. Paper manufacturers have joined or launched programmes for the sustainable usage and production of paper, such as integer goal programming models, to prevent harm to the environment. The industry also has many professional certification programmes to ensure the sustainability of the paper used today; two of the more recognisable are the Forest Stewardship Council® (FSC®) and the Programme for the Endorsement of Forest Certification (PEFC™).

Many companies that advertise being green by going paperless appear to have done very little research, as electronic communication has a bigger impact on the environment than print media. As consumers find alternatives to print, they forget to research the effects and contribution of electronics to global warming. Comparison shows that while the paper industry is moving towards sustainable production and avoiding harm to the environment, the electronics industry is lagging behind. Increased use of digital media requires a great deal of energy, which contributes to deforestation and global warming. To conclude, digital media has more negative impacts on the environment than the print industry.


Tech

Screen addiction: A silent health crisis in the making

Impact of excessive screen time on human brains is not the same for adults and children. Adult brains are more developed than children’s and have advanced social skills with self-control


Apple released its first-generation iPhone in 2007 and life changed forever. Netflix started its streaming service in the same year. Twitter launched in 2006 and soon became a global social platform, followed by Instagram and, in recent years, TikTok. One billion hours of content are consumed every day on YouTube. From shopping to banking, socialising to working and learning, there is hardly any aspect of human life that does not rely on digital devices. Technology has transformed the world, making it much smaller and smarter, small enough to fit into one’s pocket.

While technology has enabled on-the-go and on-demand lifestyles, its long-term impact on our minds and bodies has not been studied enough. Healthcare professionals agree that the negative impacts of too much screen time are as important to understand as its advantages. Prolonged, excessive use of digital devices to scroll through social media, shop compulsively online, or check endlessly for emails and texts can be as addictive as tobacco, alcohol, or drugs. Screen use releases dopamine in the brain, which can negatively affect impulse control; like drugs, screen time sets off a pleasure/reward cycle. Brain imaging studies have shown that screen technologies and cocaine affect the brain’s frontal cortex in the same way. Though not officially recognised as a disorder, various terms are used to describe these addictive digital behaviours, such as Internet Addiction Disorder, Compulsive Internet Use (CIU), Problematic Internet Use (PIU), iDisorder and, amongst younger populations, Screen Dependency Disorder. Some studies indicate that up to 38% of the population in the western world suffers from some form of screen addiction.

Moderate use of digital devices for day-to-day tasks does not imply internet or screen addiction. Rather, when online activities start to interfere with one’s life, similar to pathological gambling, it can be identified as a form of addiction. Too much screen time restructures the brain, shrinking brain tissue volume and resulting in lower cognitive abilities, depression, anxiety, obesity, and back and neck problems. Through lack of real human connection, screen addiction changes how we interact: a screen-dependent individual is more likely to show social limitations such as withdrawal, mental preoccupation, impulsive behaviour, poor self-esteem and unstable relationships. Other much-discussed impacts are sleep disorders, headaches, migraines, and vision issues such as dry eyes.

The impact of excessive screen time on the human brain is not the same for adults and children. Adult brains are more developed than children’s, with advanced social skills and self-control. In contrast, young and adolescent brains are “not matured and are predisposed to changes in structure and connectivity that can restrict neural development”.

Digital devices draw young minds in; portable and ubiquitous, they are rarely out of a child’s sight. According to a research report released in October 2019, eight- to 12-year-olds in the United States now use screens for entertainment for an average of four hours, 44 minutes a day, and 13- to 18-year-olds are on screens for an average of seven hours, 22 minutes each day. Children who spend more than two hours a day on screen-time activities score lower on language and thinking tests, and some children with more than seven hours a day of screen time experience thinning of the brain’s cortex, the area related to critical thinking and reasoning. Tech companies are developing highly addictive, entertaining devices, content and media that keep children away from the non-digital activities that foster imagination and creativity and build social skills. In younger children, psychologists are finding major developmental concerns such as speech delays, cognitive impairments and inadequate problem-solving abilities. Children are often adept at finding entertaining content that links endlessly to more content, making them uniquely vulnerable to digital advertising, cyberbullying and even exposure to predators. Children cannot grasp the unregulated span of the internet or the addictive behaviours it produces.

Adolescents who use social media excessively tend to internalise problems and others’ views of their appearance, race, gender or popularity, leading to increased risk of major depression, suicidal thoughts and anxiety disorders. A 10-year study showed increased suicide risk among teenage girls who spent excessive amounts of time on social media, and the suicide rate among paediatric patients in the United States rose by 57.4% from 2007 to 2018. Many agree that this sharp increase is linked to excessive social media use. Kids and adults alike are spending even more time online during the pandemic, adding to social isolation and anxiety.

The question remains why research data and general awareness of the negative impacts of digital devices are so limited. Like tobacco, alcohol and drug manufacturers and casinos, tech companies are monetary giants with the unique privilege of shaping public opinion. Screen addiction is a relatively new challenge for a world that was not prepared for a digital explosion that occurred in less than two decades, and researchers, policy makers and the public health community have not yet grasped the universal impact of screens. A multi-pronged strategy focusing on regulation, the creation of safer environments, mental health awareness and the re-engineering of technology for positive uses is much needed if we are to reap the real benefits of a digital world.


Economics

Is Cryptocurrency a Viable Replacement? – In Focus

How does Cryptocurrency differ from standard or Fiat Currency and what’s next for these distributed networks?


Tech

James Webb Space Telescope: NASA’s $10 billion project set to launch in October after 25 years in development

Although development of the James Webb Space Telescope (JWST) began in 1996 and it was initially planned to launch in 2007 with a $500 million price tag, the project experienced many challenges over the following years, both delaying the launch and inflating the price


NASA’s James Webb Space Telescope, CC BY 2.0, via Flickr

Although development of the James Webb Space Telescope (JWST) began in 1996 and it was initially planned to launch in 2007 with a $500 million price tag, the project experienced many challenges over the following years, both delaying the launch and inflating the price. Now, 25 years later and with nearly $10 billion spent, this international space project involving NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA) passed its final performance testing in February and is set to launch on 31st October 2021. However, in order to fully grasp the impact that the JWST will have on our understanding of the Universe, we must first look at its predecessor, the Hubble Space Telescope.

Launched in 1990, the Hubble Space Telescope was one of the largest and most advanced space-based telescopes ever sent into orbit. Hubble enjoys a clear view of the Universe, free from Earth’s atmospheric interference; as a result, it has been able to detect and examine distant stars, galaxies and other cosmic objects that were previously invisible to us. A total of five servicing missions from 1993 to 2009 upgraded the telescope and further advanced its capabilities, and for 30 years Hubble has helped astronomers around the world uncover the mysteries of the Universe. However, even a telescope of Hubble’s stature has its limitations. The primary mirror of a telescope determines how much light it can gather: the larger the mirror, the more light it collects, making even the faintest objects visible. Hubble’s primary mirror has a diameter of eight feet, while the JWST’s measures 21 feet. Furthermore, Hubble’s instrument quality and the wavelengths of light it can observe are also limited, as there is only so much that can be replaced and upgraded. In short, Hubble has cemented itself as one of the most remarkable technological achievements of humanity and will continue to operate for the foreseeable future, but it has reached the limits of what it can observe, and its successor, the JWST, is ready to embark on its mission.
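The mirror comparison follows from simple geometry: light-gathering power scales with the area of the primary mirror, and so with the square of its diameter. A quick illustrative sketch (treating both mirrors as simple circular discs, though JWST’s is in fact segmented and hexagonal, so the real ratio differs slightly):

```python
import math

def light_gathering_ratio(d_big_ft: float, d_small_ft: float) -> float:
    """Ratio of collecting areas for two circular mirrors: light-gathering
    power scales with area = pi * (diameter / 2) ** 2."""
    def area(d: float) -> float:
        return math.pi * (d / 2) ** 2
    return area(d_big_ft) / area(d_small_ft)

# JWST's roughly 21 ft primary mirror versus Hubble's roughly 8 ft one:
print(f"~{light_gathering_ratio(21, 8):.1f}x more light collected")
```

By this estimate, JWST collects nearly seven times as much light as Hubble, which is what lets it resolve far fainter, more distant objects.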

The JWST will seek to bridge the gap between what is known and unknown in the Universe. Its capabilities go beyond that of Hubble’s as its larger primary mirror will allow it to look further in the past and inside stellar dust clouds where new stars and galaxies form. Moreover, the JWST will be able to observe more closely the moments right after the Big Bang and hunt for the first galaxies formed that were otherwise invisible to us. Additionally, one of the most prominent features of the JWST is its tennis court-sized sunshield which will reduce the sun’s heat by a million times. The purpose of the sunshield is to keep the mirror and the instruments as cold as possible, since if the Sun were to heat them up, they would emit their own infrared radiation. This becomes problematic as it would drown out the faint infrared radiation from distant galaxies the telescope was built to detect. 

Once in orbit, the JWST will be utilised by thousands of astronomers to extend and complement the discoveries of its predecessors. It will seek to unlock the remaining mysteries of the Universe and answer questions that have puzzled astronomers for centuries, searching the deepest voids of space and time to answer questions about the origins of galaxies, stars and their planets. Additionally, it will give astronomers a clearer sense of the fate of the Universe and everything within it. The JWST is therefore a device to study not only the past but, by extension, the future as well.


Tech

Is Working from Home the New Normal? – In Focus

As lockdowns begin to lift, will people go back to the office or is working from home really the new normal for many? Our brand new series, In Focus, looks at angles to provide a better understanding of the issues.

