Say Hi To The New UI: AI Agents Shove Aside Clunky Keyboards And Unwieldy Mice

  • The Rise Of Consumer-Friendly UIs
  • Hands-Free Interactions
  • Breaking Down Cognitive Barriers
  • Computer Experts Everywhere

The keyboard was the standard way of using computers for much of the late 20th century, but the rise of the point-and-click mouse dramatically improved the ease with which people could interact with software. And it was an even bigger improvement on the electronic punch cards that dominated the scene before the keyboard’s arrival.

The story of the computer user interface is one of constant evolution: machines have become easier than ever for people to use, making them accessible to billions all over the world. And the story has not yet come to an end, for the rise of artificial intelligence is set to trigger another leap in computer UIs. It promises to make complex tasks, from programming to executing sophisticated financial trading strategies, simple enough for anyone to perform.

The Rise Of Consumer-Friendly UIs

The keyboard first emerged as a computer interface in the 1960s, and it remained the dominant UI for more than two decades, until Steve Jobs paid a visit to Xerox PARC one afternoon in 1979, and an idea was born.

The Apple co-founder said the trip inspired him to develop the GUI and the mouse, which became iconic innovations with the launch of the first Apple Macintosh computer in 1984.

Mice quickly became standard on every type of computer, and the GUI was given an almighty boost by the rise of Microsoft and the Windows operating system, sparking a dramatic acceleration in computer accessibility.

Over the last thirty years, we’ve seen non-stop improvements in the way people interact with their computers. Apple has been at the forefront of many of these advances, integrating innovations such as trackballs in its mice and trackpads on its laptops. It was joined by the likes of IBM, which debuted the pointing stick dubbed the “TrackPoint” on its ThinkPad laptops, creating a more seamless, all-in-one alternative to the keyboard-and-mouse combination.

It was Apple that popularized the idea of the mobile device with the introduction of the first iPhone in 2007, including a touchscreen that didn’t require a stylus. It gave birth to concepts such as swiping and pinching to zoom, though Apple didn’t actually invent this UI. Rather, that honor goes to a company called FingerWorks, which Jobs and co. acquired in 2005.

Hands-Free Interactions

Touchscreens soon became all the rage, with smartphones and then tablets becoming ubiquitous among consumers, but they were quickly followed by yet another new UI.

Voice-based controls are almost as old as touchscreens themselves. The technology was first brought to the masses by Dragon Systems, creator of the Dragon NaturallySpeaking voice dictation software. Dragon sold millions of copies globally, but despite this, it failed to popularize the technology more broadly.

Instead, voice-based UIs only really began to take off with some of the earliest forms of AI. Once more, Apple played a major role, introducing its voice assistant Siri, which could help users look things up on their iPhones and answer their questions, minimizing the need to use their hands. Siri was later joined by Google Assistant and Amazon’s Alexa, and together they expanded the capabilities of voice-powered assistants, introducing the ability to navigate with maps, read audiobooks aloud, and more.

The next big UI change was the rise of virtual and augmented reality devices, which have not yet fully taken off. At present, only a few consumers and businesses have begun to experiment with such devices, which enable a more immersive form of interactivity in applications ranging from gaming to engineering, architecture, video calls, and social networking.

Other new UI concepts that are just beginning to make waves include brain-computer interfaces, which allow people to engage with machines using thoughts alone. They work by placing electrodes on the user’s head and interpreting the brain’s electrical signals to deliver commands to computers. The tech is still in its infancy, but it has enormous potential, especially in terms of boosting accessibility for disabled people.

Already, people have begun to wonder about the possibility of combining brain-computer interfaces with VR devices. But it’s the combination of older UIs, such as keyboards and voice controls, with new forms of artificial intelligence that could represent the next major advance.

Breaking Down Cognitive Barriers

The generative AI boom ushered in by ChatGPT has now given way to the rise of AI agents, systems built on large language models that go much further than simply responding to questions. Rather than just providing answers, these newer AI systems can take actions on behalf of their users, creating the potential for a fresh wave of automation.

AI agents pair naturally with keyboards and voice recognition technologies: the user types out a command, or simply tells the agent what they want done, and the agent sets about completing the requested task. Agents will lead to significant changes in the way people interact with machines, eliminating the need to perform repetitive operations. They can also dramatically simplify tasks that would otherwise be far too complex for humans to perform by themselves.
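The command-to-task flow described above can be sketched in a few lines. This is a minimal illustration only: real agents use a large language model to decide which action to take, whereas here a simple keyword match stands in for that decision step, and the tool names are hypothetical.

```python
def summarize_report(topic):
    # Hypothetical tool: stands in for a real research/summarization call.
    return f"Summary of the latest findings on {topic}."

def schedule_meeting(topic):
    # Hypothetical tool: stands in for a real calendar API call.
    return f"Meeting about {topic} scheduled."

# Map keywords to the tools the agent can invoke on the user's behalf.
TOOLS = {
    "summarize": summarize_report,
    "schedule": schedule_meeting,
}

def run_agent(command):
    """Pick a tool based on the user's typed command and execute it."""
    for keyword, tool in TOOLS.items():
        if keyword in command.lower():
            # Treat the text after the keyword as the task's subject.
            topic = command.lower().split(keyword, 1)[1].strip() or "the request"
            return tool(topic)
    return "No matching tool for that request."

print(run_agent("Summarize quantum computing"))
# → Summary of the latest findings on quantum computing.
```

The point of the sketch is the shape of the interaction: the user expresses intent in plain language, and the agent translates that intent into a concrete action rather than merely replying with text.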

In this area, it’s not Apple that’s leading the way. Instead, that honor goes to AI startups, especially in the area of finance. The AI agent developer Giza, for instance, is pushing its idea of “xenocognitive finance”, where human intent is amplified by AI agents, allowing users to do all kinds of advanced things that simply weren’t possible before.

What Giza is trying to do is eliminate the cognitive overload that comes with financial investing. Following the financial markets comprehensively is beyond any individual’s capacity. Who, after all, can track thousands of different asset markets 24 hours a day, zoom in on different trends, watch price charts for the various “signals” that might indicate a breakout, and then make split-second decisions and execute on them?

Giza gets around this through the use of AI agents capable of tracking hundreds of markets and thousands of financial assets in real time, identifying the most promising trading strategies and executing on them. Giza likes to call this approach “cognitive ergonomics”, and what’s especially compelling is that its technology can be used by anyone, including hobbyist traders, to engage in sophisticated trading strategies that were previously accessible only to institutional investors.
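To make the idea of watching charts for "signals" concrete, here is one of the simplest examples: a moving-average crossover, a textbook indicator of a possible trend change. This is a generic illustration of the kind of pattern a trading agent might monitor around the clock, not Giza's actual strategy.

```python
def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average rises above the
    long-term average, 'sell' when it falls below, else 'hold'."""
    if len(prices) < long + 1:
        return "hold"  # not enough price history to compare
    prev_short = moving_average(prices[:-1], short)
    prev_long = moving_average(prices[:-1], long)
    cur_short = moving_average(prices, short)
    cur_long = moving_average(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"   # short-term momentum just overtook the trend
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"  # short-term momentum just dropped below the trend
    return "hold"

# A price series where the last tick pushes the short average above the long one.
prices = [100, 101, 99, 98, 97, 96, 99, 106]
print(crossover_signal(prices))  # → buy
```

A human can evaluate a rule like this for a handful of charts; an agent can evaluate thousands of such rules across thousands of assets continuously, which is precisely the cognitive gap the article describes.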

It’s just one example of how AI agents are evolving to become yet another form of UI, and as they become more common, they will break down the complexity barriers associated with many tasks. AI agents can enable anyone to become a professional researcher, for example. OpenAI recently debuted its Deep Research AI agent within ChatGPT, giving users the ability to perform intensive research on almost any subject and generate a comprehensive report in a matter of minutes, simply by telling it what they want to investigate.

Besides turning anyone into a professional analyst, AI agents can also help anyone create software. Using a tool such as Anthropic’s Claude Code, users can simply describe the application they have in mind and ask the agent to build it. This simplifies the process to the point where users don’t have to write a single line of code: they can describe the graphical interface they want, including the colors and shapes, outline all of its functions, add background music, and more, using nothing but plain-language commands.

Computer Experts Everywhere

The rapid evolution of computer UIs reflects our desire to make powerful technologies more intuitive, so that more people can use them. Back in the 1950s, when punch cards were required to interact with room-sized machines, only a handful of people in the world had access to them. Keyboards made computers accessible to thousands, then the mouse paved the way for millions of machines to appear in homes and offices all over the world. With touch-enabled smartphones and mobile devices, we’re now talking billions of users.

Nowadays, almost everyone has access to some kind of computer, and in the future, advances in the UI will make it possible for those users to do much more with those devices. AI agents promise to be the next big thing, paving the way for just about anyone to do virtually anything.
