Method 'Practical Logic'



Principles of Information



AI's Evolution in the Mid-2020s



C.P. van der Velde.

[First website version 02-04-2025]


1.

 

Introduction



By the mid-2020s, artificial intelligence (AI) has carved a distinct path from its origins, reshaping technology's role in daily life. No longer a mere buzzword or a collection of basic tools, AI manifests as a dynamic force, integrated into everything from chatbots like ChatGPT, Grok and DeepSeek to systems forecasting weather or analyzing social media trends. This evolution distinguishes itself from traditional software, draws from vast data reservoirs, organizes itself in ways still being unraveled, and grapples with risks that keep developers vigilant. This essay explores AI's state today, its unique traits, and the challenges it faces.

2.

 

Beyond Traditional Software



Present-day AI diverges sharply from the software of decades past. Traditional programs operated on fixed rules - consider a thermostat programmed to activate at 20°C, with every action scripted by developers. Change required rewriting code. By contrast, AI, particularly large language models (LLMs) built on natural language processing (NLP), learns from data. Rather than following a rulebook, these systems train on massive text corpora, discerning patterns to converse, explain, or predict. This adaptability enables AI to handle complex, human-centric tasks - translating colloquial phrases, summarizing social media debates, or drafting essays - outpacing rigid, rule-based systems with adaptive, data-fueled reasoning.
The shift is clear: from static instructions to fluid inference.
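The contrast can be sketched in a few lines of Python. This is a toy illustration only: the temperatures, labels and "training" rule are invented here, not taken from any real system.

```python
# Rule-based: every action is scripted by the developer.
def thermostat(temp_c: float) -> str:
    return "heat on" if temp_c < 20.0 else "heat off"

# Data-driven: the switching point is derived from examples instead.
def train_threshold(examples: list[tuple[float, str]]) -> float:
    """Infer a switching temperature from labeled (temperature, action) pairs."""
    on_temps = [t for t, label in examples if label == "heat on"]
    off_temps = [t for t, label in examples if label == "heat off"]
    # Place the threshold halfway between the warmest "on" and coolest "off".
    return (max(on_temps) + min(off_temps)) / 2

data = [(15.0, "heat on"), (18.0, "heat on"), (22.0, "heat off"), (25.0, "heat off")]
threshold = train_threshold(data)  # 20.0 - inferred from data, not coded
```

Changing the first program means rewriting its rule; changing the second only requires different examples.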

3.

 

Where the Data Comes From



The fuel for this learning spans a broad, eclectic mix. Up to now, AI draws from web crawls of public sites, digitized books and papers, real-time social media posts, and licensed datasets such as news archives or research collections. Continuous updates from web indexing and social media platforms keep systems current, while user interactions sharpen their responses. The scale reaches trillions of words and images, from solid reports to wild tangents, merging mainstream perspectives with fringe outbursts. Yet transparency lags concerning source details, competing views or legal constraints - leaving a murky stew of credible and questionable inputs.

4.

 

How AI Builds Itself



Unlike traditional software's premeditated, stepwise scripts, AI structures data through machine learning. So-called "neural networks" - layered algorithms inspired by brain-like connections - process raw inputs, identifying patterns: "rain" aligns with "wet" due to frequent pairing. Developers set training objectives and curate datasets, but the systems largely self-assemble their models, predicting via probability estimates. Rules aren't coded; the assumed probabilities emerge from training. AI grows more than it is built.
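The "rain"/"wet" pairing idea can be illustrated with a toy co-occurrence counter. The corpus and the probability estimate below are invented for illustration; real models learn far richer representations, but the principle is the same: associations emerge from counts, not from coded rules.

```python
from collections import Counter
from itertools import combinations

corpus = [
    "rain makes the street wet",
    "rain and wet umbrellas",
    "sun makes the street dry",
]

pair_counts: Counter = Counter()
word_counts: Counter = Counter()
for sentence in corpus:
    words = sentence.split()
    word_counts.update(words)
    # Count each unordered word pair once per sentence.
    for a, b in combinations(sorted(set(words)), 2):
        pair_counts[frozenset((a, b))] += 1

def association(w1: str, w2: str) -> float:
    """Estimate P(w2 appears | w1 appears) from raw co-occurrence counts."""
    return pair_counts[frozenset((w1, w2))] / word_counts[w1]
```

Here "rain" occurs twice, each time alongside "wet", so association("rain", "wet") comes out as 1.0 - yet no rule linking the two words was ever written.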

5.

 

The Risks of Noise and Bias



The present-day AI method - learning from examples rather than from predefined steps - allows rapid adaptation, though it carries serious drawbacks too.

a.

 

Systematic Noise Parroting



First, the data-driven strength carries risks. The entire knowledge base of present AI is filled with trillions of examples, many offering useful information, but also loads of nonsense. The vulnerability arises from almost total reliance on frequency - likes, views, or repetition - to determine the weights and priorities of data elements and relations. The present concept of AI systematically prioritizes 'engagement' over truth, reliability and validity - the principle of "popular truth" - with the infamous effect of "Garbage In, Garbage Out" (GIGO).
That method threatens to mold AI into an artificial "consensus parrot", reflecting common ignorance of scientific principles, mainstream hype (MSM's relentless "crises") or alt-media delusions (conspiracy torrents). It might overgeneralize from vocal minorities, mistaking noise for signal. It can absorb misinformation - fake news from social media, lies from bot farms, or viral hoaxes - and present it as fact.
AI risks mirroring human chaos, not reason.
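The "popular truth" risk can be made concrete with a hypothetical ranking sketch. The claims, counts and attributes below are invented; the point is only that frequency-based weighting, taken alone, lets repetition decide.

```python
# Two competing claims; all attributes are invented for illustration.
claims = [
    {"text": "Viral hoax X", "repetitions": 50_000, "verified": False},
    {"text": "Peer-reviewed finding Y", "repetitions": 120, "verified": True},
]

# Frequency-only weighting: whatever is repeated most wins.
by_frequency = max(claims, key=lambda c: c["repetitions"])
```

Here by_frequency picks "Viral hoax X": popularity, not verification, decides the ranking - Garbage In, Garbage Out in miniature.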

b.

 

Systematic Blurring Bias



Also, the actual flow of reasoning within present AI systems remains unchecked and largely opaque, even to engineers.
Items are associated, clustered and linked according to the highest so-called "resemblances", largely determined by chance coincidences in whatever input data happens to be available. (Although AI engineers often refer to these resemblances as "implicit correlations", no such metrics are actually used or calculated - they'd better be called "simulated correlations".)
Thus, rules derived and inferences made become bidirectional, presupposing logical equivalence or causality. This risks blurring essential distinctions, such as between sufficient and necessary conditions, or neglecting other possible covariates such as intermediate links and common factors.
Thus fallacies creep in: conflating correlation with causation, or chasing popular sentiment over evidence.
This way, the system will sometimes produce outcomes that look obviously absurd even to the most casual human user.
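The bidirectionality problem can be worked through with invented numbers. Rain may be sufficient but not necessary for wet streets: the two conditional probabilities differ, yet a symmetric resemblance score reports one number for both directions.

```python
# Invented day counts for illustration.
n_rain = 30   # days with rain
n_wet = 50    # days with wet streets (also sprinklers, street cleaning, ...)
n_both = 30   # every rainy day had wet streets

p_wet_given_rain = n_both / n_rain  # 1.0 - rain suffices for wet streets
p_rain_given_wet = n_both / n_wet   # 0.6 - rain is not necessary for them

# A symmetric resemblance score (here: Jaccard overlap) reports one number
# for both directions, erasing the sufficient/necessary distinction.
jaccard = n_both / (n_rain + n_wet - n_both)  # 0.6 either way
```

A system that stores only the symmetric score will happily infer "wet streets, so it rained" with the same confidence as the valid direction.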

c.

 

Systematic Zero Predictability



In some tasks, today's AI systems perform marvellously - for instance, producing, at flashing speed, flowing text in a smooth, popular style with a very natural, human "feel" to it. The problem, however, is - to summarize - that as a user, you never know when or where it will run off the rails. Generally speaking, this is the definition of zero predictability. Thus, reliability can only be established after the fact, through scrupulous error-checking, and will often turn out below sufficient, in some cases net zero.

6.

 

Filters to Catch the Junk



Efforts to counter these reliability problems rely on filters, though they remain imperfect. By the mid-2020s, tools exist to catch errors and fallacies, but they are still sparsely applied, each with limited scope and far from sufficient.

Language


Manipulative language - MSM's "urgent doom" or alt-media's "secret plot" - triggers keyword-based alerts, but subtle framing evades capture.
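A minimal sketch of such a keyword-based alert shows both its mechanics and its weakness. The trigger phrases below are invented examples; a real system would use a curated and much longer list.

```python
import re

# Invented trigger phrases for illustration.
TRIGGER_PHRASES = ["urgent doom", "secret plot", "they don't want you to know"]
PATTERN = re.compile("|".join(re.escape(p) for p in TRIGGER_PHRASES), re.IGNORECASE)

def flag_manipulative(text: str) -> bool:
    """Flag text containing any listed manipulative phrase (case-insensitive)."""
    return bool(PATTERN.search(text))
```

Direct hits like "the Secret Plot behind the news" are flagged, while reworded framing such as "some would rather you didn't look too closely" passes untouched - exactly the evasion described above.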

Facts


Fact-checking APIs cross-check claims - e.g., "Earth is actually flat" - against reliable databases, flagging outright falsehoods. Statistical scans detect bot-driven noise or outliers, like claims echoed only by spam.
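A statistical scan of the kind described might, in a toy sketch, flag a claim whose echoes come overwhelmingly from a single account. The account lists and the 50% threshold are invented assumptions, not any deployed heuristic.

```python
from collections import Counter

def looks_bot_driven(accounts: list[str], threshold: float = 0.5) -> bool:
    """Flag a claim when one account produced more than `threshold`
    of all posts echoing it."""
    counts = Counter(accounts)
    top_share = counts.most_common(1)[0][1] / len(accounts)
    return top_share > threshold

organic = ["a1", "a2", "a3", "a4", "a5", "a1"]  # spread over many accounts
spammed = ["bot7"] * 9 + ["a1"]                 # one account, 90% of posts
```

The organic pattern passes; the spammed one is flagged as likely bot-driven noise.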

Logic


Logical fallacies - like strawmen or slippery slopes - are mainly caught through pattern recognition trained on philosophy texts, missing both the full range of fallacies recognized in formal logic and nuanced twists of context.

Causality


Cause-effect errors, like assuming "ice cream sales cause drownings", face context tests, yet deeper causal analysis often lags.
Probing cause-effect claims (e.g., "cold causes flu") by testing correlation isn't enough, though - causal analysis, always needed, demands rigorous methodology. For N=1 incidents, extensive testing of covariates in real-world context is required, but such testing is still in its infancy in present AI.
Intent is irrelevant - quality rests on language patterns decoding semantic structure (e.g., "global plot" implies secret coordination, raising proof demands). Effect outweighs motive: a claim's disruption potential heightens the proof bar, as in law.
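The ice-cream example can be worked through with invented numbers: pooled across all days, sales and drownings correlate strongly, because both track temperature; stratifying on that confounder makes the association vanish.

```python
# Invented (ice cream sales, drownings) pairs, split by the confounder.
warm_days = [(5, 3), (6, 2), (7, 3), (6, 3)]
cold_days = [(1, 0), (2, 1), (1, 1), (2, 0)]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

pooled = correlation(warm_days + cold_days)  # strong: both track temperature
within_warm = correlation(warm_days)         # ~0 once temperature is fixed
within_cold = correlation(cold_days)         # ~0 as well
```

Stratification is only the simplest of the covariate checks the text calls for, but it already shows why raw correlation cannot carry a causal claim.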

Psychology


Psychological traps - pseudo-diagnostics ("you sound anxious") or thin advice ("just breathe") - hit blocks in cautious systems, but alt-media's bold leaps ("this cures all") slip through untagged.

Frequency doesn't equal truth - human oversight and source variety bolster the checks, but scale hampers precision. Filters trim the worst, but by no means all.

7.

 

What It Means



AI's evolution in the mid-2020s presents a dual reality. It stands as a powerful, adaptive force - current and embedded in everyday tools - far beyond traditional software's rigid bounds. It taps a global data well, self-organizes, and strives to sift truth from noise.
Yet flaws persist. Misinformation, bias, and shaky reasoning reflect the messy data it feeds on, but also the inadequacy of its correlation-driven mechanisms. Filters improve - catching lies, noise, and fallacies - but fall short of sealing every gap. AI emerges as a tool, not an oracle, balancing breakthroughs with considerable challenges still unfolding.