Monday, February 24, 2025

Tech Tonic | At what point does all AI become too much AI?


Let me start with something that is somewhat concerning. Elon Musk's X has updated its privacy policy to allow third parties to use our data to train their artificial intelligence (AI) models. Did you sign up for this? I take this as an illustration of something quickly getting out of hand. Is the AI envelope around everything we do becoming thicker than the Earth's ozone layer?

An AI (Artificial Intelligence) board is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. (REUTERS/Aly Song)

“The recipients of the information may use it for their own independent purposes in addition to the purposes stated in X’s privacy policy, including, for example, to train their artificial intelligence models, whether generative or otherwise,” reads X’s updated privacy policy. There is mention of a mechanism to opt out of sharing this data, but as of now there’s no setting or switch suggesting how to do that. Perhaps an Elon Musk humanity-saving tweet will shed some light on that in the coming weeks.

There was a simpler time when our collective data was collected across the World Wide Web to serve us advertisements, making money circulate and multiply for corporations. Data was the new oil, they said at the time. Data is still the new oil. AI models, beyond advertising, simply represent the next stage of that technological evolution. Whoever has data supremacy has eventual supremacy.

At this point, a question is burning within: at what point does all this AI become too much AI?

I thought about this (although it had nothing to do with X’s latest unforeseen but not entirely surprising disappointment, which came later) when Adobe outlined the new capabilities of its apps, including Photoshop, Lightroom, Premiere Pro and others, during the keynote and briefings at its annual MAX conference. Most of the additions in this latest batch of major updates are underpinned by AI and the company’s Firefly models. Generative AI for video is the next big thing. I described this in detail in my dispatches from the ground.

Throughout the three main stage sessions, including the keynote and all the briefings I was given access to, the company left no stone unturned in making the case for Firefly and broader AI use. It’s great to see generative AI being useful in cleaning up our photos (removing wires from cityscapes and architecture is great) and helping fill video editing timelines with fast generations. But as I asked Deepa Subramaniam, Vice President, Product Marketing and Creative Professional at Adobe, does this change the definition of creativity?

“For me, editing in Lightroom isn’t just about getting the photo I want, it’s about reliving that photo through editing and tapping into the nostalgia,” she told me. Her view is that someone using these tools should hold the keys to creative decision-making. Whether or not to remove those pesky, eyesore power lines that take away from the beautiful architecture you just photographed. Or to enhance the texture and color tone of the sky as you saw it at sunset, rather than how the phone’s camera decided to process it. To do it or not must remain a human decision; the option should simply be there. That is Adobe’s view on the matter.

Yet it may not be that simple. Generative fill for photos uses AI to add background and expand a frame with detail that might not have existed, or that the human eye couldn’t see. That’s one side of the coin. On the other hand, professionals who use Adobe Illustrator and Adobe InDesign will hardly complain that there is too much AI. ‘Objects on the go’, for example, or generating textures, patterns or images within a shape, vectors or even letters. You may have a valid argument that the typical skills you would expect from a designer may no longer be necessary, given what these powerful software tools can now deliver. Could any person with some sense of aesthetics and design do the job?

Maybe that’s the point. AI can and should simply remain a tool, with human supervision where necessary. The use case for Adobe’s tools, Canva’s tools, Pixelmator’s AI editing options, Otter’s AI transcriptions for audio recordings, or even Google’s AI summaries in Search, rests on a human taking corrective action when necessary. But do we actually do that?

This brings me back to an article published in Nature earlier this year, which discussed how AI tools can often give users the false impression that they understand a concept better than they actually do. One, willingly or through limited skill and understanding, takes the other along to blissfully walk the same path.

“People use it, even if the tool produces errors. One attorney was sanctioned by a judge after submitting a brief to the court containing legal citations that ChatGPT had completely fabricated. Students who submitted ChatGPT-generated essays have been caught because the papers were ‘quite misspelled’. We know that generative AI tools are not perfect in their current iterations. More people are beginning to understand the risks,” Ayanna Howard, dean of the College of Engineering at Ohio State University, wrote for the MIT Sloan Management Review earlier this year.

The examples she cites are those of Manhattan attorney Steven A. Schwartz and students at Furman University and Northern Michigan University. That puts a spotlight on the more casual use of generative AI tools, such as chatbots and image generators, which most people use without any further due diligence or research into the output they are given. AI has been wrong more than once.

The funny thing is that more and more people realize that AI is not always right. And yet, human intelligence doesn’t seem to identify and correct these errors as often as it should. One would expect the lawyer and the students mentioned in Howard’s illustration to have done exactly that. These are specific, specialized use cases. Yet the people in those instances took the core promise of a typical AI pitch too literally: human-level intelligence and time savings.

For technology companies presenting new platforms, updates or products, there is obviously pressure from more than one direction. They must keep up with, and even exceed, the competition. Apple had to do it, even though not everyone who bought its latest iPhones has the Apple Intelligence suite yet. Google had to do it, and Gemini is finding deeper integration on more phones now that Samsung’s exclusivity period is over. Microsoft is investing heavily in OpenAI, which is why any commotion at the latter also becomes a source of concern in Redmond.

Also, they must be seen talking about all the cutting-edge stuff that helps stock prices (well, mostly) and keeps investors happy. I talked about Adobe’s extensive AI pitch. Its landscape includes increasing competition from Canva, which has its own smart AI implementation that is paying off (expect the recent acquisition of Leonardo.ai to result in new tools), competition from tools that do specific things well, and investors who will still recall the $20 billion acquisition of Figma, which was abandoned late last year.

None of this is easy. Therefore, the next question to ask of generative AI is: Can AI solve the mess that AI creates? Unlikely.

Vishal Mathur is technology editor for Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice versa. The opinions expressed are personal.


