The Rise of AI: A Story of a Future, of Dreams, and of a Possible Nightmare
Jordan R Toijamba *
Picture generated using OpenAI
The American psychologist and behaviourist B.F. Skinner put it this way: "The real question is not whether machines think but whether men do." I came across this quote during my reading, and it prompted me to write this piece. Some visionary thinkers, it seems, were able to foresee the world we live in today.
AI Wasn’t Always This “Cool”
For most of human history, intelligence belonged to us alone. The capacity to think critically and solve problems was what set us apart from other living beings. Then we began building machines to reproduce those abilities, starting with simple tasks and advancing to processes so sophisticated that most of us now struggle to comprehend them.
When AI first entered mainstream technology, it was treated as a novelty rather than a threat. People saw it mainly as an automation tool, something that handled routine operations but posed no challenge to genuine human decision-making. And indeed, the early AI systems were not very capable.
Many of us remember Clippy, the paperclip assistant in Microsoft Word. Clippy was one of the first AI-based virtual helpers, and it caused users more headaches than it relieved. Later came Siri and Alexa, which were far more capable but still limited in what they could do.
Then, over the past ten years, something changed. AI stopped being a simple tool and became a technology that makes decisions, shapes social structures and generates content on its own.
AI Already Controls Our Daily Lives
AI was woven into everyday life long before ChatGPT made it famous. It is there in the smallest things we do. AI shapes your social media experience by deciding which posts you see, quietly altering your views in the process.
Amazon seems to know exactly what you want to buy next, because its systems track every interaction you make. Companies use AI-based hiring tools that screen candidate résumés before a human ever sees them. Your bank's fraud detection systems monitor your spending for anything suspicious.
We still talk about AI as something that belongs to the future, yet it already governs much of our daily lives. It is not coming; it has arrived.
The Danger of AI Making Decisions for Us
The central problem with AI is that we have started handing over vital decisions to it. What it chooses to show us is determined by the conditions and data it was built on.
AI curates the news we read and the content we see on social media. It shapes our social interactions. And we trust it completely, without asking how it works.
Research published in 2018 found that people tend to trust decisions made by AI over those made by humans, even when the AI is wrong. Why? Because machines are assumed to be unbiased, since they deal purely in data.
But AI isn't free of bias. Studies show that AI systems amplify the prejudices already present in the datasets they learn from. AI-powered hiring tools have been reported to discriminate against women and minorities, and research on facial recognition shows that these systems misidentify people of colour at far higher rates than white people.
The technology is a mirror, reflecting the faults society embedded in it at creation, and numerous documented cases show those faults playing out in the real world.
Here are a few real-life examples of AI making serious mistakes:
1. Self-Driving Cars and Fatal Accidents
Tesla and a number of other carmakers have spent years developing self-driving AI. The technology is impressive, but it is far from perfect.
In Arizona, a self-driving Uber vehicle struck and killed a pedestrian because its AI failed to recognise her as a human being. The system classified her as an unknown object and never triggered its safety mechanism.
In another case, Tesla's Autopilot failed to detect a white truck crossing the highway, setting off a chain of events that ended in a fatal crash. The system, trained mostly on darker-coloured vehicles, did not register the white truck as a hazard.
Even the most advanced AI systems carry deep flaws, and when those flaws cause errors, the consequences can be fatal.
2. AI-Generated Misinformation and Deepfakes
One of the greatest dangers AI poses to society is its ability to create fake content convincing enough to fool human perception. Deepfake technology can produce realistic videos of real people saying things they never said, with consequences for politics, journalism and public trust.
Imagine a deepfake video of a world leader declaring war, or an AI-generated voice recording of a CEO spreading false financial information and sending a company's market value tumbling.
AI is advancing faster than the regulations meant to govern it. By the time a new law is passed, the technology has already moved beyond it.
What Needs to Be Done - Right Now
If we do not want AI to slip beyond human direction, we must act now. Here's what needs to happen:
1. AI ethics committees must become standard practice: every major AI company should maintain an independent ethics board to review how its AI systems are built and deployed.
2. Autonomous weapons should be banned outright before such systems move beyond human control.
3. The public needs to be educated about how AI shapes their day-to-day lives so they can make informed decisions.
4. AI companies should be required to disclose how their algorithms work and what data they rely on.
5. Data privacy laws must be strengthened; AI systems should not have unrestricted access to our personal information.
Hope or Fear?
AI is an object of both hope and fear. It has the potential to transform medicine, tackle global problems and make life more convenient for society as a whole.
At the same time, it enables surveillance of our private lives and spreads misinformation in ways that escape human supervision. As Spider-Man once said, "With great power comes great responsibility."
We need to ask ourselves whether we are responsible enough to handle the power AI gives us. If we are not, we risk creating a technology we cannot control.
In some ways, this is already happening. Deepfake scams, fake news and AI-generated propaganda have distorted public opinion and elections around the world. Researchers are asking the fundamental question: can AI be reliably managed and controlled at all? Our ability to manage it may depend on how deeply it has already reached into our lives.
Some experts have called for a pause, or even a freeze, on AI development, fearing consequences we cannot anticipate. Others argue that stronger regulation is the only way to ensure AI does more good than harm.
* Jordan R Toijamba wrote this article for The Sangai Express
The writer is a freelancer.
He is currently pursuing a PhD at the Manipur Institute of Management Studies.
This article was webcast on March 30, 2025.