Tech

Artificial Intelligence – friend or foe?

Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to mimic the learning, perception, and problem-solving capabilities of the human mind.

What is Artificial Intelligence?

Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to mimic the learning, perception, and problem-solving capabilities of the human mind.

It works by learning from examples and experience, understanding and responding to language, recognising objects, and combining these and other capabilities to perform functions similar to those a human could perform.
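
To make “learning from examples” concrete, here is a minimal sketch in Python: a tiny nearest-neighbour classifier that labels a new case by finding the most similar example it has already seen, one of the simplest forms of the example-driven learning described above. The animals, measurements, and labels are invented purely for illustration.

```python
import math

# Toy training examples: (height_cm, weight_kg) pairs labelled "cat" or "dog".
examples = [
    ((25, 4.0), "cat"), ((30, 5.0), "cat"),
    ((55, 20.0), "dog"), ((60, 25.0), "dog"),
]

def classify(point):
    """Label a new point with the label of its nearest known example."""
    nearest = min(examples, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((28, 4.5)))   # -> "cat": closest to the small animals seen so far
print(classify((58, 22.0)))  # -> "dog"
```

The program never receives a rule for telling cats from dogs; it generalises from the examples it was given, which is the essence of the approach.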

Artificial intelligence is a branch of computer science and an endeavour that aims to replicate human intelligence in machines. Its use has sparked many debates and left many questions unanswered, to the extent that there is no single, universally accepted definition of artificial intelligence. Stuart Russell and Peter Norvig, in their book Artificial Intelligence: A Modern Approach, define it simply as “the study of agents that receive percepts from the environment and perform actions.”

Background of Artificial Intelligence 

Alan Turing, a British mathematician, explored the mathematical possibility of artificial intelligence. He suggested that if humans use the information available to them, together with reason, to solve problems and make decisions, why can’t machines do the same? He set out this idea in his 1950 paper Computing Machinery and Intelligence, in which he considered how to build intelligent machines and how to test their intelligence.

What initially stood in Turing’s way was the lack of key prerequisites for intelligence in computers. Before 1949, computers could execute commands but could not store them. On top of that, the cost of leasing a computer was very high, and funding required both proof of concept and advocacy from high-profile people.

The Logic Theorist is believed to be the first artificial intelligence programme. Funded by the RAND Corporation, it was designed to imitate the problem-solving skills of humans, and it proved instrumental in catalysing the next 20 years of AI research.

Whether we realise it or not, artificial intelligence programs are everywhere around us. They generally fall into two categories: Narrow AI and Artificial General Intelligence (AGI).

Narrow AI works within a limited context and usually performs a single task very well. Although these systems can seem very intelligent, they work under constraints and are far more limited than even the most basic human intelligence. Narrow AI is the most common form; it has been very successful over the last decade and has yielded great societal benefits. Examples include search engines such as Google, personal assistants like Siri and Alexa, and image recognition software.

The other broad category is AGI: a machine with general intelligence that it can apply to solve problems, much as a human does. This is the kind of AI we tend to see in films, such as the robots of Westworld or the Terminator of The Terminator.

It’s important to note, however, that AGI does not exist, and the quest for it has been met with great difficulty. Devising a “universal algorithm for learning and acting in any environment” is extremely hard, and creating a machine or program with a complete set of cognitive abilities is a nearly impossible task.

Are there risks to Artificial Intelligence?

There is no doubt that AI has been revolutionary and world-changing; however, it is not without risks and drawbacks.

“Mark my words, AI is far more dangerous than nukes.” This was a comment made by Tesla and SpaceX CEO Elon Musk at the South by Southwest tech conference in Austin, Texas. “I am really quite close… to the cutting edge in AI, and it scares the hell out of me,” he told his SXSW audience. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” Although these ideas may seem extreme and far-fetched to some, he is not alone in holding them. The late physicist Stephen Hawking told an audience in Portugal that the impact of AI could be catastrophic if its rapid development is not controlled ethically and strictly. “Unless we learn how to prepare for, and avoid, the potential risks,” he explained, “AI could be the worst event in the history of our civilization.”

How would AI get to this point exactly? Cognitive scientist and author Gary Marcus shed some light on this in a 2013 New Yorker essay, “Why We Should Think About the Threat of Artificial Intelligence”. He argued that as machines become smarter, their goals could change. “Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

It should be noted that these risks are associated with AGI, which has not yet come to fruition, so for the moment they remain hypothetical. Although AI has been at the centre of dystopian science fiction, experts agree that it is not something we need to worry about anytime soon. For now, the benefits of AI far outweigh its drawbacks: it has improved the quality of life of many people, reduced the time that tasks require, and enabled multitasking. And because decisions are based on previously gathered information, errors are reduced significantly.

As with most technological inventions, AI has both advantages and disadvantages, and it is important that we weigh the issues with care. Ultimately, we must utilise the benefits that AI provides for the betterment of society.

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Extremism

Kiwi Farms: Far right extremist website blocked over harassment

Kiwi Farms, an internet forum that facilitates online discussion and harassment, particularly of neurodiverse and trans people, came under scrutiny for doxing its targets.

Earlier this month Kiwi Farms, an internet forum that facilitates online discussion and harassment, particularly of neurodiverse and trans people, came under scrutiny for doxing its targets. Notably, it targeted a trans Twitch streamer, Clara Sorrenti, who as a result had to flee the country. At first the content delivery network Cloudflare refused to stop providing its services; however, on 3rd September it stopped protecting the website.

Sorrenti is just one victim of a far-right website that has been linked to at least three suicides. Kiwi Farms was launched in 2009 as the CWCki, dedicated to documenting the online presence of Christine Weston Chandler, also known as Chris Chan or Sonichu. It officially changed its name in 2015 and soon gained considerable popularity. The format of the website is simple: identify a victim, label them a “lolcow” – online slang for someone who can be exploited and made fun of – and then stalk them to the point of harassment.

Some recent victims of the trolling website:

The far-right Georgia Republican Marjorie Taylor Greene became a target of the website when she was swatted in August of this year: a fake call was made to the authorities to bring them to her house.

A caller connected to Kiwi Farms told police officers that a man had been shot five times in a bathtub at Greene’s address. According to the police report, the police later received a computer-generated call stating that she had been targeted because of her stance on “transgender youth’s rights.”

In response, Greene stated: “There should be no business or any kind of service where you can target your enemy. That’s absolutely absurd.”

In late August there was a bomb threat at Boston Children’s Hospital, which had to contact the authorities; the threat was anonymous, and luckily no bomb was found. The hospital was targeted over claims that it provided gender-affirming hysterectomies to children. The bomb threat followed a week-long cyber-attack on the hospital, which said in a statement:

“(the hospital) has been the target of a large volume of hostile internet activity, phone calls, and harassing emails including threats of violence toward our clinicians and staff. We are deeply concerned by these attacks on our clinicians and staff fueled by misinformation and a lack of understanding and respect for our transgender community.”

Children’s National Hospital in Washington DC was subjected to similar harassment for the same reasons and likewise had to release a statement.

The Trevor Project’s hotline exists to help LGBTQ+ young people battling suicidal thoughts. In late August, Kiwi Farms users tried to clog the hotline with fake calls so that children in genuine need could not get through. They did not succeed, but the website filled with users proud of what they had done.

The most recent victim was Clara Sorrenti, who on 5th August opened her door to the barrel of a gun pointed at her face. The police had been called to her house by Kiwi Farms users in a swatting incident that followed months of harassment.

She had been accused of sending violent emails to local politicians, which led to her brief arrest. She and her fiancé then moved to a local hotel, only to be doxed again after users identified the hotel from a picture she posted of her cat sitting on the bed.

Realising the severity of the situation, she moved to Northern Ireland to evade the stalking and harassment. The users found her again in no time and hacked her family members’ mobile phones. The stalking was made worse because she fought back instead of backing down, crowdfunding around $100,000 to “seek justice and make sure something like this doesn’t happen to anyone else.”

She also created a trending Twitter hashtag, #DropKiwiFarms, which was taken up by her supporters as well as the Anti-Defamation League.

The role of Cloudflare

Cloudflare is a company that provides security services, warding off DDoS attacks and keeping hackers at bay. It has been under scrutiny for protecting Kiwi Farms, but that is not the only controversial website it has served: the company also protected the Daily Stormer and 8chan, both of which it eventually dropped.
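
To give a flavour of the kind of defence such a company provides, below is a minimal token-bucket rate limiter in Python. The token bucket is one standard ingredient of flood mitigation in general; the class, the rates, and the per-IP usage shown here are illustrative assumptions, not Cloudflare’s actual implementation.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` per second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or challenge the request

# One bucket per client IP; a flood from a single source quickly exhausts its bucket.
bucket = TokenBucket(rate=5, capacity=10)
print([bucket.allow() for _ in range(15)].count(True))  # roughly 10 allowed from the burst
```

Legitimate visitors never exhaust their bucket, while a flood from one source is throttled almost immediately; real mitigation layers many such techniques together.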

Without Cloudflare’s protection, a website can be attacked by hackers and forced offline. The company initially refused to stop providing services to the controversial forum, stating:

“Just as the telephone company doesn’t terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policymakers, and experts that turning off security services because we think what you publish is despicable is the wrong policy.”

A while later, on 3rd September, the company did terminate its services to Kiwi Farms, citing an “unprecedented emergency and immediate threat to human life”.

However, similar sites dedicated to doxing people still exist, such as the Lolcow.farm imageboard, which has been around since at least 2014, and the Pretty Ugly Little Liar forum, which started in 2015.

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Daily Brief

Google Fires Blake Lemoine for Making Public Claims Regarding AI Technology 

Published

on

Artificial Intelligence AI Machine Learning 30212411048
  • Recently, Google fired Blake Lemoine, a software engineer at the company, who claimed that its LaMDA model had a sentient mind.
  • Google, along with many AI experts, rejected his claims, and the company stated that his public disclosures violated its employment and data security policies.
  • Lemoine made headlines by describing how LaMDA, one of Google’s chatbot systems, appeared to display human-like awareness when holding conversations on subjective matters.
  • Other AI engineers had made similar public claims before Mr. Lemoine, proposing that AI technology is becoming more sentient.

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Daily Brief

SumOfUs Researcher’s Avatar Sexually Assaulted in Meta’s Horizon Worlds

  • In Meta’s virtual reality platform, Horizon Worlds, the avatar of a 21-year-old SumOfUs researcher was sexually assaulted.
  • Meta says it has built safety tools into Horizon Worlds to prevent negative experiences, especially after earlier reports of virtual assaults and inappropriate behaviour in February.
  • One safeguard is Personal Boundary, which prevents avatars from coming within a set distance of four feet of each other to respect personal space (a minimal sketch of this kind of check follows the list). The company also offers ways to block and report users.
  • Nevertheless, SumOfUs reported that the researcher was “encouraged” to disable the Personal Boundary feature and was approached by two male avatars in a room, one of whom watched while the other came very close to her. She also witnessed lewd comments, homophobic slurs, and virtual gun violence.
  • SumOfUs has filed a shareholder resolution requesting a risk assessment of the human rights impacts of the metaverse. A shareholder meeting is set to be held on Wednesday.
  • SumOfUs’s campaigns director Vicky Wyatt stated, “Let’s not repeat and replicate [real-world issues] in the metaverse. We need a better plan here on how to mitigate online harms in the metaverse”.
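
The Personal Boundary mechanic described above amounts to a minimum-distance constraint between avatars. Here is a minimal sketch in Python of how such a check might look; the class names, coordinates, and enforcement logic are illustrative assumptions, not Meta’s implementation.

```python
import math
from dataclasses import dataclass

PERSONAL_BOUNDARY_FT = 4.0  # the roughly four-foot default described above

@dataclass
class Avatar:
    name: str
    x: float
    y: float
    boundary_enabled: bool = True  # users can switch the feature off

def boundary_breached(a: Avatar, b: Avatar) -> bool:
    """True if the avatars are closer than an enabled boundary allows."""
    if not (a.boundary_enabled or b.boundary_enabled):
        return False  # both opted out, as in the incident reported above
    return math.hypot(a.x - b.x, a.y - b.y) < PERSONAL_BOUNDARY_FT

print(boundary_breached(Avatar("A", 0, 0), Avatar("B", 3, 0)))  # True: too close
```

The sketch also shows why the safeguard failed here: once the feature is disabled, there is no enabled boundary left to enforce.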

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Geopolitics

Digital Authoritarianism – A Growing Challenge to World Press Freedom

The press and electronic media have long been active channels for propagating discourse, be it political, social, or religious. They make it easy for information to reach the masses, which is why governments find it crucial to control them and so keep manufacturing an “us” and “them” division in society. But this control has become a threat to a free and independent press. Digital authoritarianism, cyber-surveillance, and the monitoring of people’s political and social activity through media have made it difficult for people to express their opinions freely, and easier for governments to control information.

While China has long controlled the influx of information and the regulation of ideologies through its Great Firewall, other countries are joining in with their own measures to increase cyber-surveillance. Internet shutdowns are one of the tools of digital authoritarianism: according to the non-profit digital rights organisation Access Now, there were 182 internet shutdowns around the world in 2021.

These shutdowns accompanied growing political tensions in the regions concerned, for example during the coup in Myanmar, and were used to influence the geopolitical situation in Eastern Europe, particularly around Russia. Similarly, as Africa experienced an epidemic of coups in 2021, the number of internet shutdowns there reached 19.

India, which claims to be the “world’s largest democracy”, imposed internet shutdowns more than a hundred times in 2021, and more than half of them fell on the already repressed people of Jammu and Kashmir.

While Russia was the only country in Europe to impose an internet shutdown in 2021, in 2022 the Russia–Ukraine war prompted EU countries to block access to Russia Today, Sputnik, and other Russian state-regulated outlets, calling the ban a measure against “war propaganda.” Similarly, since the beginning of the conflict, Russia has introduced new internet laws to monitor the spread of news, restricting the use of global applications such as Instagram and Facebook.

More recently, the internet traffic of occupied Ukrainian regions has been rerouted through Russian networks. NetBlocks, an internet observatory, noted: “Connectivity on the network has been routed via Russia’s internet instead of Ukrainian telecoms infrastructure and is hence likely now subject to Russian internet regulations, surveillance, and censorship.”

As countries around the world are exposed for exerting digital dominance and accused of collecting user data for their own benefit, the challenge is now to create “democracy-affirming technologies” to combat the digital authoritarianism that threatens press freedom worldwide.

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Society

Sexualized Child Images “Meet Community Guidelines” on Instagram

Instagram has come under a lot of heat, and rightly so, for not removing accounts that post pictures of children in swimwear or partial clothing and attract streams of sexualized comments, even after such accounts are reported via the in-app reporting tool.

The tool allows users to flag accounts with suspicious activity for review by the platform’s automated moderation technology, which in this case ruled the concerning accounts “acceptable” and in line with “community guidelines”, so they remained live.

An independent researcher challenged this and reported one such account to Instagram using the in-app reporting tool, only to be met with a response built around a phrase many of us are all too familiar with: “due to the high volume of reports” the report could not be reviewed, but the “(automated) technology has found that this account probably doesn’t go against our community guidelines”. The account, with more than 33,000 followers, remained live the whole day.

All this while Instagram’s parent company, Meta, like other social media companies, claims a zero-tolerance approach to child exploitation – a claim that remains unsubstantiated by its actions and policies.

Instagram is not alone in failing to handle this issue effectively. Twitter has many similar accounts, often known as “tribute pages”. One such account was ruled not to be breaking Twitter’s rules after being reported through the in-app tool, despite posting pictures of a man performing sexual acts over images of a 14-year-old TikTok influencer. Other tweets from the same account, reading “looking to trade some younger stuff”, were also seemingly not concerning enough until the campaign group Collective Shout publicly called the account out, at which point it was taken down.

Should accounts that are suspected of illegal activity and clearly harmful be allowed to remain live only because they do not, yet, meet a criminal threshold?

Are “zero tolerance” claims consistent with companies allowing content that threatens children to remain live despite being reported, let alone with their failure to moderate content proactively?

Should social media companies rely on automated detection to prevent the serious risks of sexualization, harassment, and exploitation of our children when such technologies have failed miserably even at keeping up with simple hate speech?

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Health

Tired of Carrying a Wallet? Have Your Credit Card Microchipped Under Your Skin

Image: Dr Mark Gasson has an RFID microchip implanted in his left hand by a surgeon, 16 March 2009.

Walletmor, a British-Polish startup, claims to have created the first implantable microchip that can be used at any contactless payment terminal in the world. The company has sold over 500 microchips, each slightly bigger than a grain of rice and weighing less than one gram. A microchip costs £199 and can be implanted by a professional at an aesthetics clinic.

Walletmor claims that the microchip is entirely safe and has received regulatory approval. Once implanted, it is ready to use and will not shift from its place. It requires no battery or external power source. The implantable capsule is made of biocompatible material and consists of a microprocessor that stores encrypted payment data and a proximity antenna that connects to nearby payment terminals.
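
As a purely conceptual illustration of how a passive, battery-free chip of this kind can respond to a terminal, the Python sketch below models a tag that answers a reader’s random challenge with a keyed digest, so that a captured response cannot simply be replayed. Every name here is hypothetical; this is not Walletmor’s design or any real payment protocol.

```python
import hashlib
import secrets

class PassiveChip:
    """Conceptual passive tag: powered by the reader's field, replies to a challenge."""
    def __init__(self, card_key: bytes):
        self._key = card_key  # provisioned payment credential, never transmitted directly

    def respond(self, terminal_challenge: bytes) -> bytes:
        # A fresh response per challenge, so an eavesdropped reply is useless later.
        return hashlib.sha256(self._key + terminal_challenge).digest()

class Terminal:
    def request_payment(self, chip: PassiveChip) -> bytes:
        challenge = secrets.token_bytes(16)  # random nonce per transaction
        return chip.respond(challenge)

chip = PassiveChip(card_key=secrets.token_bytes(32))
print(Terminal().request_payment(chip).hex()[:16], "...")
```

The key point is that the chip holds state and computes only when energised by a nearby reader, which is what lets it run with no battery at all.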

The founder of Walletmor, Wojciech Paprota, claims that the microchips are impossible to hack, stating: “our payment implant cannot be forgotten or lost. This means that, unlike a standard payment card, it cannot end up in the wrong hands. It will not fall out of our wallet, and no one will take it from there. The implant cannot be scanned, photographed or hacked.”

At the moment, the microchip pairs with a mobile payment app called iCard, where a user can top up funds for contactless payments.

Paprota believes that payment implants will one day be as popular as regular payment cards, and Walletmor’s long-term goal is to add further functionality to the chip, such as identification and key-card access.

But before microchip implants can be widely accepted, Paprota and other emerging microchip companies must first assure people of their safety. Though implanted microchips are convenient for day-to-day tasks, many fear that as the technology advances, a person’s data and precise location could be exposed to hackers, raising serious safety concerns.

Nada Kakabadse, a professor of ethics at Reading University, has questioned the ethics of implanting microchips. Kakabadse stated: “there is a dark side to the technology that has a potential for abuse… to those with no love of individual freedom, it opens up seductive new vistas for control, manipulation and oppression… And who owns the data? Who has access to the data? And, is it ethical to chip people like we do pets?”

So the question arises: how much are we willing to risk for the sake of convenience?

All views expressed in this editorial are solely those of the author, and are not expressed on behalf of The Analyst, its affiliates, or staff.

Born and raised in the Bay Area, California, Faiza is a mother of two with a degree in Psychology and Paralegal Studies. She is passionate about lending her voice to those who are disadvantaged.
