
Should we be afraid of artificial intelligence?


Maybe a little, according to Kevin Matsui, director of the Center for Advancing Responsible and Ethical Artificial Intelligence

Artificial intelligence is becoming increasingly embedded in our lives. Should we be afraid?

Probably a little. But not to the point where we live in fear and avoid all technology, Kevin Matsui said Wednesday evening at the main branch of the Guelph Public Library.

Matsui is director of the Center for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI) at the University of Guelph.

In his talk, he addressed burning questions about AI – whether it will take away people’s jobs, spread disinformation, and whether it is generally something to be feared or embraced.

The event was the first part of the Friends of the Guelph Public Library Open Minds Fall Speaker series, which is free to the public.

While there are certainly things we should be concerned about, he said, “It’s just like any other technology.”

In fact, most of us have probably been working with AI for longer than we think.

For example, streaming services such as Spotify and Netflix use AI to make targeted recommendations. It is also used in the sports industry to analyze game footage and improve the fan experience, as well as player performance and safety.

The Moderna vaccine even used AI in its development and testing. Drug development normally takes a decade, but Moderna cut that to a matter of weeks by using a data-driven design approach to make predictions.

Future drug development will rely heavily on artificial intelligence to speed up development and combat things like antibiotic-resistant pathogens, he said.

AI can also help monitor insect biodiversity, predict which soil types suit a given crop, analyze medical imaging, forecast food prices, personalize education and much more.

Still, AI probably won’t replace jobs – at least not as many as you might think, and not right away.

For example, while AI is becoming more prevalent in virtually every workplace, hands-on physical jobs are not really at risk, because non-repetitive physical tasks, such as welding, are unique to each situation and difficult to automate.

Nurses could use AI to help ensure patients take their medications in the correct doses, and radiologists could get AI assistance interpreting scans, but these jobs still require a human presence and are unlikely to be replaced, he said.

The occupations most at risk of being replaced are highly repetitive data-entry or text-related jobs. But even there, Matsui said, that probably won’t happen anytime soon.

So what are the concerns?

One is the growing difficulty of distinguishing what is real from what is not. One attendee referenced a video that appeared real but turned out to be AI-generated and was spreading misinformation.

That’s why, he said, we should treat such content like any other scam: with caution, skepticism and fact-checking.

Another is how to prevent people from passing off AI writing as their own.

While he said it will become increasingly difficult to tell what has been written by AI, universities are combating this by changing how they evaluate students – for example, requiring essays to be written in person.

Many are also concerned that AI will be used for nefarious purposes – a problem he believes it is government’s responsibility to address.

“The legislation needs to catch up and really anticipate some of the problems ahead,” he said.

As AI takes the world by storm, he said, everyone is suddenly an expert, which helps misinformation spread. When Boeing 737 Max planes crashed while using autopilot in recent years, some blamed AI – but those crashes were actually the result of poor engineering and a poor safety culture.

To combat disinformation, he said, it is important to distinguish between what is and is not AI: a calculator, an automated vehicle and cruise control are not AI technology; they are pre-programmed.

It’s also worth noting that generative AI, like ChatGPT, is just the tip of the iceberg when it comes to AI as a whole, and may not be as big an industry as you might think.

“I’m right to be concerned, but not petrified yet,” he said.


