Changes are occurring at an incredibly rapid rate in the technology and scope of computer systems. Remarkable advances are being made in artificial intelligence, in the mass of interconnected small devices known as the "Internet of Things," and in wireless connectivity. Unfortunately, these improvements bring not only benefits but also potential dangers. To have a secure future, we need to anticipate what is going to happen with computing and act early. So what do experts think will happen next, and what can be done to prevent serious problems?
To answer that question, our research team from the universities of Lancaster and Manchester turned to the science of looking into the future known as "forecasting." No one can predict the future, but we can assemble forecasts: descriptions of what is likely to happen based on current trends.
In fact, long-term predictions of technology trends can sometimes prove surprisingly accurate. And a great way to get predictions is to combine the ideas of many different experts and find points of agreement.
For a new research paper, we consulted 12 expert "futurists": people whose work is the long-term forecasting of how changes in computer technology will play out through 2040.
Using a technique called a Delphi study, we synthesized the futurists’ predictions into a set of risks and recommendations for addressing those risks.
I. Software Concerns
The experts predicted rapid advances in artificial intelligence (AI) and connected systems, ushering in a world far more computer-driven than today's. But, surprisingly, they expected little impact from two much-touted innovations. Blockchain, a way of recording information that makes it difficult or impossible to tamper with, they suggested is largely irrelevant to today's problems. And quantum computing is still in its infancy and likely to have little impact over the next 15 years.
Futurists have highlighted three major risks associated with the development of computer software:
1. AI competition leads to trouble
Our experts suggested that many countries treat AI as a field in which they want to gain a competitive technological advantage, and that this attitude will encourage software developers to take risks in how AI is used. Combined with AI's complexity and its potential to exceed human capabilities, this could spell disaster.
For example, imagine that a shortcut taken during testing introduces an error into the control system of cars manufactured after 2025, and the error goes unnoticed amid the complexity of the AI's programming. Because the fault is tied to a specific date, a large number of cars could begin behaving abnormally at the same time, potentially killing many people around the world.
2. Generative AI
Generative AI could make it impossible to determine the truth. For many years, photos and videos have been very difficult to fake, so we expect them to be authentic. Generative AI is already fundamentally changing this. The ability to create convincing fake media is expected to improve, making it much harder to tell whether an image or video is real.
Suppose that people in positions of trust, such as respected leaders or celebrities, use social media to post authentic content but occasionally mix in convincing fakes. For those following them, there would be no way to tell the difference. Knowing the truth becomes impossible.
3. Invisible cyber attacks
Finally, the sheer complexity of the systems being built (networks of interdependent systems owned by different organizations) creates unintended consequences. Getting to the root of why things go wrong will be difficult, if not impossible.
Imagine a cybercriminal hacking an app used to control devices such as your oven or refrigerator and turning them all on at the same time. This causes a sudden increase in demand for electricity on the power grid, causing large-scale power outages.
For utility experts, even identifying which devices caused the spike could prove difficult, let alone determining that they were all controlled by the same app. Cyber sabotage becomes invisible, indistinguishable from ordinary failures.
II. Software Jiu Jitsu
The purpose of such predictions is not to instill alarm but to let us start addressing the problems now. Perhaps the simplest suggestion the experts put forward was a kind of software jiu-jitsu: using software to defend and protect itself. A computer program can perform its own safety audit if we write additional code that verifies the program's output, code that checks itself.
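The idea above can be sketched in a few lines of code. This is a minimal illustration, not anything from the study itself: the sorting routine, the function names, and the checks are all assumptions chosen to show a program refusing to return output that fails its own audit.

```python
from collections import Counter

def verify_sorted(original, result):
    """Independent audit: result must be ordered and contain exactly
    the same items as the input (no additions, losses, or duplicates)."""
    ordered = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    same_items = Counter(original) == Counter(result)
    return ordered and same_items

def sort_audited(values):
    """Sort the input, then refuse to return an answer that fails the audit."""
    result = sorted(values)
    if not verify_sorted(values, result):
        raise RuntimeError("self-audit failed: output rejected")
    return result
```

The key design point is that the audit code is independent of the code it checks: even if the main routine were buggy or tampered with, the verifier would catch output that violates the stated guarantees before it is used.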
Similarly, the methods already used to ensure that software operates safely can continue to apply to new technologies. It is important that the novelty of these systems not become an excuse for ignoring good safety practice.
III. Strategic Solutions
But experts agreed that technical answers alone are not enough. Instead, solutions will be found in the interaction between humans and technology.
We need to build skills and new forms of cross-disciplinary education to address these human-technology problems. And governments should establish safety principles for their own AI procurement, legislate for AI safety across the sector, and encourage responsible development and deployment practices.
These predictions give us a range of tools for dealing with the problems that may lie ahead. Deploying those tools now is how we realize the exciting promise of our technological future.