AI Transformation Brief – January 2025

Author(s): Bill Wong

The latest AI announcements and developments from Meta, OpenAI, Apple, the State of California, and the White House.

VOL 13: January 2025

INFO-TECH AI INSIGHTS

AI Transformation Brief

Featuring AI best practices and insights to enable our members to strategize, plan, develop, deploy, manage, and govern AI-based technologies and solutions.

In This Issue

  • AI in the News
  • AI Research Highlights
  • Vendor Spotlight
  • Upcoming Events & Resources

AI IN THE NEWS

Meta to stop fact-checking content, moving to Community Notes

Read the Meta announcement regarding the end of fact-checking

Read Meta’s Hateful Conduct Policy updates

On January 7, 2025, Meta announced that it will stop hiring third parties to fact-check content and will adopt a Community Notes model instead. Facebook introduced fact-checking in 2016 in response to complaints about fake news, particularly around the US presidential election that year. In addition, on January 8, Meta updated its Hateful Conduct policy, removing several provisions, including one that prohibited referring to “women as household objects or property.” Compare the January 7 version of the policy to see what has been removed.

ANALYST ANALYSIS

Delegating content moderation to the user community is a poor substitute for third-party fact-checking. CEO Mark Zuckerberg acknowledges that the change responds to the new political environment and that the amount of fake news will likely increase as a result. He justifies the move on the grounds that fact-checking is too complex, prone to errors, and inhibits free speech. For people who rely on social media for news, this action is disappointing and demonstrates a lack of leadership. To be fair, Meta has never moderated its content well: it has typically operated in reactive mode, leveraging AI to maximize engagement rather than safety, and its responsible AI initiatives have been ineffective and consistently secondary to AI initiatives that grow revenue.

OpenAI announces o3 and o3 mini

OpenAI o3 (video) announcement

On December 20, 2024, OpenAI announced o3, its most advanced frontier model, following the full release of o1 on December 5, 2024. As the successor to o1, o3 continues the evolution of OpenAI’s reasoning models, delivering improvements in reasoning, coding, mathematics, and complex problem-solving.

ANALYST ANALYSIS

Some of the key innovations introduced into the architecture of the o3 AI model include:

  • Program Synthesis – o3 can dynamically create code to perform a given task or solve a given problem.
  • Simulated Reasoning – o3 pauses and reflects on possible processes before generating a response, enabling the model to address more complex multistep tasks.
  • Evaluator Model – o3 can generate multiple solution paths during inference, leveraging its integrated evaluator model to determine the most promising option.

o3 is expected to be made available to the public later this year. Other vendors, such as Google and Anthropic, are advancing their models’ reasoning capabilities with similar techniques, including chain-of-thought approaches; the sketch below illustrates the evaluator-driven selection idea in simplified form.
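OpenAI has not published o3’s internals, but the evaluator-model idea can be illustrated with a generic best-of-n pattern. The following is a minimal, hypothetical Python sketch: generate_candidates and score are stand-in functions for a reasoning model and an evaluator model, not OpenAI APIs.

```python
import random

# Hypothetical stand-in for a reasoning model: returns several
# candidate solution paths (e.g. chains of thought) for a prompt.
def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    return [f"candidate path {i} for: {prompt}" for i in range(n)]

# Hypothetical stand-in for an evaluator model: scores how promising
# a candidate path looks (random here; a real evaluator is trained).
def score(candidate: str) -> float:
    return random.random()

def best_of_n(prompt: str, n: int = 5) -> str:
    """Generate n solution paths at inference time, then let the
    evaluator select the most promising one."""
    return max(generate_candidates(prompt, n), key=score)

print(best_of_n("Prove that the sum of two even numbers is even."))
```

In a production system, the evaluator is itself a trained model estimating which candidate reasoning path is most likely to be correct, which is what makes spending extra compute at inference time pay off.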


AI IN THE NEWS

California wildfires spark numerous disinformation campaigns

Read about some of the numerous conspiracy theories surrounding the wildfires

On January 7, 2025, some of the deadliest wildfires in California’s history broke out in Los Angeles. They may prove to be the most catastrophic wildfires on record, with property damage projections running to several billion dollars. Unfortunately, even as authorities continue to fight the fires, they have had to spend time and resources fighting misinformation and disinformation. Governor Gavin Newsom’s website maintains a list of the various false claims alongside the facts: California Fire Facts.

ANALYST ANALYSIS

The World Economic Forum’s Global Risks Report 2024 identified misinformation and disinformation as the greatest short-term risk the world will face over the next two years.

Some of the conspiracy theories about the wildfires include:

  • They are caused by California’s mismanagement of forest lands.
  • The water reservoirs in California have been dry for 15-20 years.
  • Firefighters are using women's purses to fight fires.
  • They were caused by a “directed energy weapon” attack.

And there are many more. To avoid being influenced by disinformation campaigns, seek out trusted news sources and scrutinize the accuracy and validity of all information. If you get your news from a social networking platform, make sure it is one that performs fact-checking.

White House executive order directs agencies to build safe, secure, and sustainable AI data centers to meet future demands

Read the Executive Order on Advancing U.S. Leadership in Artificial Intelligence Infrastructure

On January 14, 2025, during the final days of the Biden administration, the president issued an executive order directing certain federal agencies to enable, accelerate, and build the next generation of AI data centers.

The president commented: “The order will speed up how we build the next generation of AI infrastructure right here in America.”

ANALYST ANALYSIS

The goals of the executive order are to enhance economic competitiveness, national security, AI safety, and clean energy. Some of the major directives of the executive order include:

  • Host gigawatt-scale AI data centers on Department of Defense (DOD) and Department of Energy (DOE) sites.
  • Deploy new clean energy generation to support AI infrastructure.
  • Accelerate transmission development around federal sites.
  • Facilitate interconnection of AI infrastructure to the electric grid.
  • Ensure low electricity prices for consumers.
  • Advance allies’ and partners’ development of AI infrastructure.

Apple settles in Siri lawsuit but denies eavesdropping

Read the “Apple to pay $95 million to settle Siri privacy lawsuit” article

On January 2, 2025, Apple announced that it would pay $95 million to settle a class action lawsuit alleging that Siri, its voice-activated assistant, violated users’ privacy.

Apple users routinely complained that Siri recorded their private conversations after being activated unintentionally and that those recordings were shared with third parties such as advertisers.

ANALYST ANALYSIS

The lawsuit covers a ten-year period (September 17, 2014, to December 31, 2024) and focuses on the following Siri-enabled devices: iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod Touch, and Apple TV. Payouts could amount to as much as $20 per Apple device.

Apple denies any wrongdoing, and the settlement still needs approval from the presiding judge. The $95 million represents a very small fraction of Apple’s annual profits, and in such cases typically only 3%-5% of eligible consumers file an official claim; the quick arithmetic below shows the scale those figures imply.
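As a rough back-of-the-envelope check, a minimal sketch using only the figures reported above (actual payouts will depend on the number of approved claims and on court approval):

```python
settlement_fund = 95_000_000   # reported settlement amount (USD)
payout_per_device = 20         # reported maximum payout per device (USD)

# The fund covers at most this many device-level claims:
max_claims = settlement_fund / payout_per_device
print(f"Up to {max_claims:,.0f} device claims")  # 4,750,000

# If only 3%-5% of eligible consumers file, exhausting the fund would
# imply roughly 95M-158M eligible devices.
for claim_rate in (0.03, 0.05):
    implied = max_claims / claim_rate
    print(f"{claim_rate:.0%} claim rate -> {implied:,.0f} eligible devices")
```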


AI RESEARCH HIGHLIGHTS

Fighting Disinformation

Disinformation Claim: The Hollywood sign is/was on fire.

FACT: It was not on fire. Below is a small sample of the AI-generated deepfake images created and distributed across various social media platforms.

[Images: AI-generated deepfakes depicting the Hollywood sign surrounded by flames]

Source: AI and the Los Angeles Wildfires: Fighting Disinformation
Info-Tech Research Group, 2025

Coming Soon!

Introducing Info-Tech’s Responsible AI Risk Management Framework (RMF)

The comparison below positions Info-Tech’s Responsible AI RMF alongside the NIST AI RMF, the EU AI Act, and the OECD AI Principles.

Scope

  • NIST AI RMF: Guidance for organizations focused on practical risk management across the AI lifecycle
  • EU AI Act: Law focused on protecting EU citizens and fundamental rights
  • OECD AI Principles: Ethical guidelines with a global POV emphasizing long-term societal impacts of AI
  • Info-Tech’s Responsible AI RMF: Guidance focused on operationalizing responsible AI across the AI lifecycle

Regulatory Nature

  • NIST AI RMF: Non-regulatory, voluntary guidance
  • EU AI Act: Regulatory framework with legal implications
  • OECD AI Principles: Non-regulatory, voluntary guidelines
  • Info-Tech’s Responsible AI RMF: Non-regulatory, voluntary guidance

AI Principles

  • NIST AI RMF: Developed in the US but applicable globally
  • EU AI Act: Focused on the EU but with potential global impact
  • OECD AI Principles: 5 core human-centric principles
  • Info-Tech’s Responsible AI RMF: 6 foundational responsible AI principles

Risk Categories

  • NIST AI RMF: Flexible framework for risk assessment without explicit categorization
  • EU AI Act: Explicit risk categorization (unacceptable, high, limited, minimal)
  • OECD AI Principles: Risk-based approach to AI governance
  • Info-Tech’s Responsible AI RMF: Flexible framework for risk assessment without explicit categorization

Adaptability

  • NIST AI RMF: Designed to be adaptable to evolving technologies
  • EU AI Act: Provides a more fixed structure but includes mechanisms for updating
  • OECD AI Principles: Intended to evolve with the evolution of AI
  • Info-Tech’s Responsible AI RMF: Designed to be adaptable to evolving technologies

Implementation Approach

  • NIST AI RMF: Structured but adaptable process
  • EU AI Act: Prescribes specific requirements based on risk level
  • OECD AI Principles: Human-centric approach with emphasis on transparency, accountability, and collaboration
  • Info-Tech’s Responsible AI RMF: Structured but adaptable process
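To make the contrast in risk categorization concrete, here is a minimal, hypothetical Python sketch of how an organization might triage example AI use cases into the EU AI Act’s four explicit tiers. The use cases and their tier assignments are illustrative only, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, logging, oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative triage of example use cases into EU AI Act tiers.
# A real assessment requires legal review against the Act's annexes.
EXAMPLE_TRIAGE = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TRIAGE.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

The NIST, OECD, and Info-Tech frameworks, by contrast, would treat a mapping like this as one input to a flexible risk assessment rather than as a prescribed categorization.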


Vendor Spotlight

AI Marketplace

How AI is used as an assistant for software developers. A thought leader interview with Amazon Q Developer.

Amazon Q Developer: How AI Is Used as an Assistant for Software Developers

How AI is used in workforce management. A thought leader interview with Dayforce.

Dayforce: How AI Is Used in Workforce Management

How AI can amplify data quality and governance within enterprises. A thought leader interview with PiLog.

PiLog: How AI Can Amplify Data Quality and Governance Within Enterprises

How AI is transforming claims management. A thought leader interview with Five Sigma.

Five Sigma: How AI Is Transforming Claims Management

UPCOMING AND RECENT EVENTS

AI AND DATA ANALYTICS SOLUTIONS – RESOURCES

    • AI Strategy & Discovery
    • AI Selection, Capabilities & Proof of Concept
    • AI Implementation, Integration & Scale
    • AI Governance, Liability & Risk

AI EDITOR-IN-CHIEF

Bill Wong – Info-Tech AI Research Fellow

Visit our IT Critical Response Resource Center
Over 100 analysts waiting to take your call right now: +1 (703) 340 1171