RawStory

All posts tagged "artificial intelligence"

This Trump goon's bizarre threat sounds like it came from a drunk guy on a barstool

On Friday, Trump barred an American AI developer, Anthropic, from doing further business with the federal government, and barred all contractors from doing business with Anthropic — an extreme punishment typically reserved for adversarial countries.

Anthropic’s crime? Refusal to let the Department of Defense use its AI system, Claude, for surveilling American citizens or in autonomous weaponry that removes humans from decisions to kill.

Defense Secretary Pete Hegseth — the man who group-texted attack plans to a reporter, wanted to punish an astronaut for stating the law, then shot party balloons with potent lasers despite FAA warnings that the beams could blind pilots flying overhead with passengers — demanded that Anthropic let him use its AI system without contractual restrictions. When Anthropic said no, Trump blacklisted the company.

It’s hard to say what’s more appalling — that the Trump administration is building tools for mass public surveillance like China’s, or that an undisciplined dry drunk like Hegseth has access to lethal toys.

Keeping up with China … in the worst way

Trump has said he wants to keep up with China through “global technological dominance” and the “widespread use of AI.” China’s authoritarian government uses one of the most advanced public surveillance systems in the world, collecting extensive facial recognition, biometric data, and personal profiles from private citizens against their wishes.

China captures these data from citizens’ faces, conversations, social media posts, phones and other devices while people stand at crosswalks, ride the bus, and go to the store, then feeds the data into an AI database used for oppression: for law enforcement, “monitoring social behavior,” and controlling access to services.

China’s system is similar to what Palantir, the company of Trump-backing oligarch Peter Thiel, is building: a high-level data integration platform that will enable U.S. law enforcement, ICE, the IRS, DHS, DOJ, the military, and any other rogue agency Trump wants to weaponize to combine facial recognition, license plate reader data, and other biometric data for mass surveillance.

Poor Pete, nobody believes him

Anthropic’s $200 million contract with the DOD contained clauses that expressly prohibited Claude from being used for mass surveillance of Americans or for autonomous weaponry: “killer robots” that can identify, select, and kill targets without a human in the decision-making loop. Anthropic had integrated Claude into some classified military networks, but those restrictions applied there as well.

These were the contractual restrictions Hegseth’s DOD demanded be removed. But Anthropic wasn’t having it.

Just before Trump blacklisted the company, Anthropic’s CEO, Dario Amodei, said it could not “in good conscience” agree to the Pentagon’s request. Amodei has expressed concern that Claude could be used for mass surveillance by automatically assembling “scattered, individually innocuous data into a comprehensive picture of any person's life,” which seems to be exactly what Trump is trying to do.

In a series of angry social media posts, Undersecretary of Defense Emil Michael accused Anthropic of “lying” about using Claude for mass surveillance because the Dept. of Defense “doesn’t do mass surveillance as that is already illegal.”

Apparently the DOD does do comedy, because the suggestion that this regime will follow the law is a joke.

Forget about the hundreds of court orders Trump has already violated. How many people have been murdered off the coast of Venezuela with zero legal justification? Claiming without evidence that we’re in an "armed conflict" with "narco-terrorists" is not a legal justification; it’s a dictator’s “shoot now, ask questions never” strategy for breaking the law.

What can the AI do?

Most Americans are blissfully unaware of how the emerging AI landscape could change their lives, and not for the better. Since I’m no AI expert, I asked Google AI to explain in simple terms how Anthropic’s Claude, if left to Hegseth’s command, could be used to spy on Americans. Here’s how AI described Claude’s functional capacity, verbatim:

  • Mass Data Synthesis (Sorting Huge Amounts of Info): Imagine a super-fast robot reading billions of text messages, emails, and internet posts all at once. It looks for "moods" (like who is angry or unhappy) and makes a map of where those people live.
  • Intelligence Dossiers (Digital Secret Files): Using smart computer programs to read thousands of pages of documents about one person instantly. It acts like a digital detective, putting together a secret file on someone's whole life.
  • Automated Tracking (Digital Footprints): Looking at where people drive, what websites they visit, and who they talk to. This combines records to draw a map of where someone goes, like cameras on streets tracking cars.
  • Law Enforcement Support (Police Tech Tools): Companies like Palantir create software for the police. This software combines information from cameras, bank records, and phone calls to track suspects and help police find them quickly.

The dispute has put Silicon Valley on edge. If Trump and Hegseth can change the terms of AI contracts after the fact, why sign contracts at all?

The regime’s dishonesty isn’t helping. Before Trump blacklisted Anthropic, Pentagon officials said they had “no interest” in using the illegal surveillance tools outlined above, while seeking unfettered access to them. Color me, and anyone with half a brain, skeptical.

  • Sabrina Haake is a columnist and 25+ year federal trial attorney specializing in 1st and 14th Amendment defense. Her Substack, The Haake Take, is free.

Pentagon stand-off with tech firm reaches key moment as admin considers 'nuclear option'

The Pentagon has found itself at odds with a technology firm refusing to give way to demands from Donald Trump's administration.

Pentagon heads made it clear to Anthropic, the artificial intelligence firm, that the company would need to lower its safeguards if it wished to have its tools used by government officials. The company has yet to give in to this demand, with chief executive Dario Amodei saying that doing so would undermine the nation's defense.

Department of Defense head Pete Hegseth urged the company to give in to government demands or find their AI no longer in use at the DoD. Amodei replied, "These threats do not change our position: we cannot in good conscience accede to their request.

"Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider."

Moving to a different supplier would be a bold option for the Department of Defense.

Michael C. Horowitz, a director at the University of Pennsylvania who oversaw AI weapons policy during the Biden administration, says the Pentagon is no longer trusting of Anthropic after the company resisted the DoD's demands.

Horowitz said, "The Pentagon does not trust that Anthropic will be a reliable vendor, and Anthropic worries about misuse of its technology."

Washington Post staffers Ian Duncan, Elizabeth Dwoskin, and Tara Copp suggested the Pentagon could act sooner rather than later should Anthropic fail to meet their demands.

They wrote, "Because Claude is already in use across the Defense Department, exiling Anthropic and switching to a rival could prove costly. Although Defense officials have suggested they could use the Defense Production Act to force the AI company to share its systems, experts are split on whether the law could be applied.

"Doing so would send a chilling message to the AI firms the Pentagon hopes to lean on that they risk having their own innovations seized if the government sees something it wants."

Katie Sweeten, a former Justice Department liaison to the Pentagon, has said the move would set a worrying precedent and could be seen as a point of no return.

She said, "This is a literal nuclear option which I think rightfully companies should be very concerned about."

We have invented a threat so lethal this Trump stooge should not be allowed near it

Which is more important to you? Allowing Pete Hegseth to use artificial intelligence (AI) however he wants, OR preventing AI from doing mass surveillance of Americans and creating lethal weapons without human oversight?

That’s the stark choice posed by the intensifying fight between an AI corporation called Anthropic and Pete Hegseth, Trump’s Secretary of “War.”

AI is dangerous as hell. I view it as one of the four existential crises America now faces — along with climate change, widening inequality, and the destruction of our democracy.

To be sure, AI is capable of changing human life for the better. But if unregulated, it could be a destructive nightmare — giving government the power to know everything about us and suppress all dissent, distorting news and media to the point where no one can distinguish between lies and truth, and threatening human beings with bots that could decide we’re unnecessary obstacles to their taking over the earth.

Now is the time we should be putting guardrails in place. But two forces are making this difficult if not impossible.

The first is corporate greed, which is why OpenAI, Elon Musk’s xAI, and Google have jettisoned all precautions. Several AI researchers have left these companies in recent weeks, warning that safety and other considerations are being pushed aside as the corporations raise billions of dollars and prepare for initial public offerings that will make their executives hugely wealthy.

The second is the Trump regime, which doesn’t want any restrictions on AI, including by state governments. That’s largely because the AI industry has become a powerful force in Washington, throwing money at politicians who’ll do its bidding (including Trump) and against politicians who want guardrails. And because so many Trump officials are corrupt, with their own financial stakes in AI.

Anthropic has been one of the most safety-conscious of all AI companies. It was founded as an AI safety research lab in 2021 after its CEO Dario Amodei and other co-founders left OpenAI, concerned that OpenAI’s ChatGPT wasn’t focused enough on safety.

Amodei has argued that AI needs strict guardrails to prevent it from potentially wrecking the world. In 2022, he chose not to release an earlier version of Anthropic’s AI software Claude, fearing it would start a dangerous technology race. In a podcast interview in 2023, he said there was a 10 to 25 percent chance that AI could destroy humanity.

In January, Amodei argued in an essay that “using A.I. for domestic mass surveillance and mass propaganda” was “entirely illegitimate,” and that A.I.-automated lethal weapons could greatly increase the risks “of democratic governments turning them against their own people to seize power.” Internally, the company has strict guidelines barring its technology from being used to facilitate violence.

Over the past year Anthropic has battled the Trump regime by pushing for state and federal AI guardrails.

In recent weeks, Hegseth and Amodei have been fighting over the Pentagon’s use of Anthropic’s AI, called Claude. Amodei has stuck to his demands: no surveillance of Americans, and no lethal autonomous weapons lacking human control.

The fight started when Palantir helped the Pentagon capture Venezuelan president Nicolás Maduro. Palantir is a Pentagon contractor that uses Anthropic’s Claude. (Palantir, co-founded by far-right billionaire Peter Thiel and now headed by Alex Karp, is my candidate for the worst corporation in America because it allows governments, militaries, and law enforcement agencies to quickly process and analyze massive amounts of your personal data.)

When top executives at Anthropic asked executives at Palantir if Claude had been used in the Maduro operation, the Palantir execs became alarmed that Anthropic might not be a reliable partner in future Pentagon operations. They contacted the Pentagon and Hegseth.

Last Tuesday, Hegseth issued Anthropic an ultimatum: It must allow the Pentagon to use its AI for any purpose, or the Trump regime will invoke the Defense Production Act — forcing Anthropic to let the Pentagon use Claude while also putting all of Anthropic’s government contracts at risk.

The Pentagon already has agreements with Musk’s xAI to use its AI Grok, and is closing in on an agreement with Google to use its own AI model, Gemini. But Anthropic’s Claude is considered a superior product, producing more accurate information.

What’s at stake here? Everything.

Pentagon officials have said that they have the right to use AI however they wish, as long as they use it lawfully.

But because AI has so much political power, Congress and the Trump regime won’t enact laws to prevent it from doing horrendous things. That in effect leaves the responsibility to private AI companies such as Anthropic. Anthropic says it wants to support the government but must ensure that its AI is used in line with what it can “responsibly do.”

Hegseth and the Trump regime have given Anthropic until this Friday at 5 pm to consent to letting the Pentagon use its AI however it wishes, or the regime will simply take it.

Friends, this isn’t just a dispute between two people — Hegseth and Amodei. Nor is it a fight between the Pentagon and a single corporation. The issue goes way beyond this particular controversy. I don’t want to be overly alarmist about it, but the outcome could affect the future of humanity.

What can you do? Call your senators and representatives now, today, and tell them you don’t want the Defense Department to take Anthropic’s AI technology, and you do want them to enact strict controls on the future uses of AI.

Visit www.congress.gov/members/find-your-member and type your address into the search box. A list of your representatives and their contact information will appear. Or you can call the Capitol switchboard directly at 202-224-3121 to be connected to your members’ office.

As I’ve said before, congressional staffers log every single call that comes into their office in a database that informs the member of the issues their constituents are engaged with, and they use this data to inform their decisions. Staffers answering the phones are trained to talk with constituents, and they do it all day. They won’t be debating you about your position, and are likely to be primarily listening and taking notes.

Please. Today.

  • Robert Reich is an emeritus professor of public policy at Berkeley and former secretary of labor. His writings can be found at https://robertreich.substack.com/. His new memoir, Coming Up Short, can be found wherever you buy books. You can also support local bookstores nationally by ordering the book at bookshop.org

An economic tidal wave is heading straight for America

May I be candid with you about the U.S. economy? It’s growing nicely, and the stock market has soared. But on what really counts to most Americans — jobs and wages — it’s sh---y.

The Bureau of Labor Statistics reported this morning that employers added 130,000 jobs in January. That’s not bad until you see that health care accounted for more than half of them. Construction gained 33,000 jobs. Most other sectors were flat.

I would have expected far more job growth, considering the paucity of new jobs last year.

Artificial intelligence isn’t the culprit directly. I think employers have been cautious about hiring given all the uncertainty in the political economy, starting with Trump’s wildly vacillating tariffs.

But many employers are assessing AI’s likely impact on their businesses, and may be holding back on some of their hiring in anticipation. After all, payrolls comprise two-thirds of a typical business’s costs.

Promoters of AI are working overtime to spin it as benefiting average people. Anyone who watched the Super Bowl ads for AI last Sunday saw how AI is being spun as a wondrous boon to humankind.

Consider the breathless front-page headline in a recent Washington Post: “These companies say AI is key to their four-day workweeks.” The subhead was as euphoric: “Some companies are giving workers back more time as artificial intelligence takes over more tasks.” As the Post explained:

“More companies may move toward a shortened workweek, several executives and researchers predict, as workers, especially those in younger generations, continue to push for better work-life balance.”

Hurray! There’s utopia at the end of the AI rainbow! A better work-life balance!

Similar articles are appearing in Fortune and the New York Times. The AI spin brigade is in full force.

Business leaders are rhapsodizing about how AI will “free” their employees to take more time off. Zoom’s Eric Yuan told the Times that “AI can make all of our lives better, why do we need to work for five days a week? Every company will support three days, four days a week. I think this ultimately frees up everyone’s time.”

Jamie Dimon, CEO of JPMorgan Chase, says advancing technology could push the workweek down to just three-and-a-half days. Microsoft cofounder Bill Gates openly wonders whether a two-day workweek could be the future.

Elon Musk pushes the idea to the extreme (as he does everything else): “In less than 20 years — but maybe even as little as 10 or 15 years — the advancements in AI and robotics will bring us to the point where working is optional.”

Even better: “There will be no poverty in the future and so no need to save money,” says Musk. “There will be universal high income.”

All of this is pure rubbish.

Even if AI produces big productivity gains — which is still an open question (an MIT study last year found that “despite $30–40 billion in enterprise investment into GenAI, 95 percent of organizations are getting zero return”) — it’s far from clear that most workers will see much, if any, of AI’s benefits.

If productivity rises, as it’s supposed to do when the workplace becomes immersed in AI, each worker will generate more value, by definition. And with more value, supposedly we’re all better off.

But worker productivity has been rising for years, yet the median wage has barely risen when adjusted for inflation.

Here’s the truth: The four-day workweek will most likely come with four days’ worth of pay. The three-day workweek, with three days’ worth. And so on.

So, as AI takes over their current work, most workers will probably get poorer or have to take additional jobs to maintain their current pay.

In his famous 1930 essay “Economic Possibilities for Our Grandchildren,” the great British economist John Maynard Keynes predicted that in a century, “the discovery of means of economizing the use of labour” would outpace our ability to “find new uses for labor.” In other words, less work.

Yet Keynes was sure that by 2030 the “standard of life” in Europe and the United States would be so improved by technology that no one would worry about making money. Productivity gains would create an age of abundance.

In fact, by 2030, he predicted, our biggest problem would be how to use all our leisure time:

“For the first time since his creation man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

We’re still four years away from Keynes’s prediction, but at the rate we’re going, it seems wildly wrong.

Rather than creating an age of abundance in which most people no longer have to worry about money, new technologies have contributed to a two-tiered society comprising a relatively few with extraordinary wealth and a vast number of people barely making it.

AI is likely to further widen inequality. It already is. This week, as layoffs climbed and job openings plunged — especially for professionals exposed to AI — the Dow Jones Industrial Average closed above 50,000 for the first time.

Imagine a small box — call it an iEverything — capable of producing for you everything you could possibly desire. It’s a modern-day Aladdin’s lamp. You simply tell it what you want and — presto! — the item or service suddenly appears.

Sounds wonderful until you realize that no one will be able to buy the iEverything because no one will have any means of earning money, since the iEverything will do everything.

This is obviously fanciful, but the dilemma is very real. Productivity gains are great, but the too-little-discussed question is how they’ll be distributed.

The distribution issue can’t be ignored. When more can be done by fewer people, who gets paid what?

It comes down to who has the power.

For most of the last 40 years, the jobs and wages of blue-collar Americans were eroded by globalization and computer software, and most of the benefits from productivity gains went to the richest 10 percent.

AI is now putting the jobs of millions of white-collar Americans on the line. If nothing is done, we’re likely to see white-collar jobs suffer the same erosion — with most of the benefits from the productivity gains going to the richest 0.1 percent.

Unless Americans — white collar, blue collar, pink collar — have the power to demand a share in the productivity gains, profits will go to an ever-smaller circle of owners — leaving the rest of us with less money to buy what can be produced, which is a formula for a fragile economy and an even worse politics.

If the five-day workweek with five days of pay shrinks to four days with four days of pay, and then to three, and to two, and perhaps one, AI will supplant most people’s work and drive down our take-home pay. We may see a dazzling array of products and services spawned by AI, but few of us will be able to buy them.

But this isn’t necessarily our fate. Assuming AI delivers big productivity gains, most Americans could receive the benefits of those gains if most Americans have the bargaining power to get them.

Could labor unions ever be revived to the point that they gave most Americans the bargaining power they need? (I’ll deal with that question shortly.)

Will at least one of our two dominant political parties enact laws that distribute those gains more fairly? (Think a Universal Basic Income, for example, or wealth taxes financing child care, elder care, and universal health care.)

These are not impossible outcomes. After all, as I’ve argued, the future owners of AI have a financial interest in enabling most people to buy the dazzling array of products and services AI spawns.

In the meantime, though, don’t fall for the breathless rubbish about AI allowing employers to “free up” employees’ time.

AI may deliver wondrous benefits. The real question is whether AI’s productivity gains (assuming AI delivers them) are widely shared.

  • Robert Reich is an emeritus professor of public policy at Berkeley and former secretary of labor. His writings can be found at https://robertreich.substack.com/. His new memoir, Coming Up Short, can be found wherever you buy books. You can also support local bookstores nationally by ordering the book at bookshop.org

Trump ally's 2028 dreams in chaos as MAGA infights on thorny issue: 'He can't say a word'

Republicans face a significant challenge ahead of the 2028 elections: infighting over the party's stance on artificial intelligence. The clash has put potential presidential hopeful and Vice President JD Vance in the middle of the GOP's divided ideology on the burgeoning technology.

Within MAGA, it's unclear where the party will land on support or disdain for AI, and what that could mean for 2028 presidential contenders within the Republican Party, according to a Politico Magazine report. Those conflicting attitudes have raised questions over what will happen after President Donald Trump, who has generally opposed any regulation of tech companies and AI technology.

Sen. Josh Hawley (R-MO), who has generally been an ally of Trump but has broken with the president in the past, has raised his own concerns about the tech industry and could also be a potential presidential hopeful.

“The AI revolution is proceeding on transhumanist lines. It is working against the working man, his liberty and his worth,” Hawley said during a speech at the National Conservatism Conference in September. “It is operating to install a rich and powerful elite. It is undermining our most cherished ideals. And insofar as that keeps on, AI works to undermine America.”

With Vance and Secretary of State Marco Rubio expected to be potential front-runners to lead the Republican party, following a soft endorsement from Trump in 2025, it's unclear which direction the GOP will take on its views or critiques of the tech industry.

"That’s because AI is poised to strike directly at the contradictions embedded within the new coalition that Trump has built: It will pit the new blue-collar members of the GOP base against the business-aligned sector that Trump has increasingly won over in his second term. It will pit family-values and religious conservatives against the newly emboldened tech wing," Politico reported.

"And it is a policy issue that could prove particularly problematic for the 2028 contenders who are closest to Trump, because the Trump White House is pursuing an agenda on AI that is out of step with what many Trump-aligned voters and influencers want — especially the more populist elements that are increasingly prominent in the GOP’s ranks," according to Politico.

This could create a conflict for Vance, who has aligned his policies with the Trump administration.

"Vance is handcuffed because he can’t say a word," a former Trump administration official, who spoke on the condition of anonymity to openly discuss dynamics among White House insiders, told the outlet. “Hawley can spend the next three years railing against AI.”

GOP navigating 'minefield' issue that could split party ahead of midterms

Republican Party insiders say the split among representatives over whether to pursue further reliance on artificial intelligence is a challenge.

An unnamed insider working with groups advocating for state regulation of AI tools has said the GOP landscape is a "minefield" so close to the midterms. Donald Trump's administration has made it clear they want "global dominance" and released an action plan in July 2025 titled "Winning the Race".

A description for the Trump administration's plan for AI reads, "The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set the global standards and reap broad economic and security benefits.

"Under President Trump, our Nation will win, ushering in a new Golden Age of innovation, human flourishing, and technological achievement for the American people. America’s AI Action Plan has three policy pillars – Accelerating Innovation, Building AI Infrastructure, and Leading International Diplomacy and Security."

Despite the plan, some GOP members believe the reliance on AI is causing more problems than it is worth. The Republican rep who called the plans a "minefield" when speaking to CNN went on to suggest the president is "aware" of how splintered the party is.

He said, "We represent working people, and if we’re not sensitive towards the impact on jobs, no question, there’s going to be political cost to that. If we’re not sensitive to protecting children, no question."

Former Arizona Sen. Kyrsten Sinema believes the public is fearful of technological advances because people do not understand what the technology means for their lives.

Sinema, who founded the AI Infrastructure Coalition this year to advocate in favor of AI, said, "Lots and lots of Americans are scared of AI and don’t understand it.

"AI companies haven’t done the most excellent job of helping people see AI in their daily lives. That story needs to be told.”

Not all Republican Party members are as keen on AI as the Trump administration. One unnamed insider who advises tech clients on political strategy says the growing backlash could hurt the industry.

They said there is "potential in the long run for Trump to see political headwinds and walk away from AI.

"It should be a genuine concern of the industry. I think that’s why there’s so much discussion from proponents about the national security risks of losing the AI race to China. They’re trying to box Trump into a corner."

Big Tech and AI lobbying 'skyrockets' under Trump — and experts are sounding the alarm

From Alphabet to X, eight of the largest tech giants spent a record $71 million combined on U.S. political lobbying in 2025, according to a new report from Issue One, a bipartisan nonprofit working to reduce the influence of money in politics.

“Big Tech is using every tool in the toolbox to gain access and influence in Trump's Washington,” said Michael Beckel, senior research director at Issue One and report co-author.

It’s the latest example of “pay-to-play politics” under President Donald Trump, the report says — highlighting how tech, artificial intelligence (AI) and social media companies spent nearly $330,000 each day Congress was in session in 2025, and came away with a series of wins around industry regulations.

For one, this week the U.S. and China signed off on an agreement to sell social media company TikTok’s U.S. business to investors including Oracle, run by billionaire Trump backer Larry Ellison.

ByteDance, TikTok’s parent company, spent $8.3 million on lobbying in 2025, after spending a record $10.4 million in 2024, according to the report.

“We're talking massive political contributions, massive lobbying expenditures, and these new filings show that there's been a huge boom for many of the highest profile tech players in Washington, making sure that they've got friends and ways to influence people in Washington,” Beckel said.

Meta, the parent company of Facebook and Instagram, spent the most among the tech giants on federal lobbying in 2025, at $26.29 million — up 8 percent from the previous year.

‘Delivering what AI wants’

AI companies “skyrocketed” lobbying spending in 2025, Beckel said.

That’s because AI companies stand to win “substantially” from such spending as they look to expand data centers and get ahead of competitors, said Jonathan Ernest, an assistant professor of economics at Case Western Reserve University in Cleveland.

“They're finding that that lobbying can be reasonably successful in persuading the administration to potentially craft laws that are maybe more favorable to them in certain ways,” Ernest said.

“They've found that these additional dollars being spent on lobbying are now more worthwhile than they were before because the likelihood of them being successful goes up, and the potential gains have increased as well.”

Nvidia, the AI chipmaker, increased lobbying expenditures eightfold in 2025, spending nearly $5 million.

OpenAI, the company behind ChatGPT, spent just shy of $3 million, up approximately 70 percent from 2024.

“The overwhelming pattern that we've seen from the Trump administration is putting certain industries and certain companies at the forefront of how they're making policy decisions,” Beckel said.

In December, Trump signed an executive order limiting state AI regulation, which Beckel said was “basically delivering to the AI industry what it wants.”

“This seems pretty clear that the Trump White House is playing favorites, and the industry leaders who are able to make their voices heard in Washington through political contributions and lobbying expenditures have a prime seat at the table right now,” Beckel said.

‘Influentially large’

The report examined the latest lobbying disclosures from Alphabet, Microsoft, Snap, X, ByteDance, Meta, Nvidia and OpenAI.

Alphabet spent $16.62 million in 2025, second-most of the Big Tech players and up 12 percent from the previous year. Microsoft spent $10.1 million — just 2 percent less than its 2024 spending, according to the report.

All the companies either declined to comment or did not respond.

Issue One said curtailing the influence of Big Tech money on politics was supported by both Democrats and Republicans. The nonprofit advocates for "common sense reforms to the tech sector to help ensure that Congress holds Big Tech accountable," Beckel said.

For tech giants with billions in revenue, lobbying expenditures don’t represent “a huge chunk of their operating budget, but it's still a very influentially large amount of money,” Ernest said.

But, that doesn’t mean tech giants will continue to spend on lobbying at a growing rate.

“It will depend on how much it feels like it's needed for them,” Ernest said.

“If they feel like they already have an administration that's reasonably lax in terms of enforcement of regulatory matters or reasonably supportive of companies that even may be amassing some sort of advantage by growing very large and becoming more monopolistic, then they'll find it less useful to continue to put money towards those ends.”

Another day, another horror, another grim step in Trump's war on humanity itself

It seems appropriate right now to try to clarify one of the most basic questions America is (or should be) struggling with: What does it mean to be a human being?

The confusion is mounting.

Three illustrations:

1. Corporations

Corporations are not human beings. That should be self-evident.

But in 2010, the Supreme Court ruled (in its Citizens United case) that corporations are the equivalent of “people” under the First Amendment to the Constitution, with rights to free speech.

This ruling has made it nearly impossible for the government to restrict the flow of money from giant corporations into politics. As a result, the political voices — and First Amendment rights — of most real human beings in America are being effectively drowned out.

But in coming years, states will have an opportunity to circumvent Citizens United by redefining what a “corporation” is in the first place.

Absent state charters that empower them to become “corporations,” business organizations are nothing more than collections of contracts — between investors and managers, managers and employees, and consumers and sellers.

In the 1819 Supreme Court case Trustees of Dartmouth College v. Woodward, Chief Justice John Marshall established that:

“A corporation is an artificial being, invisible, intangible [that] possesses only those properties which the charter of its creation confers upon it …. The objects for which a corporation is created are universally such as the government wishes to promote.”

Montana is now readying a proposition for its 2026 ballot that would empower organizations that sought to be corporations there to do many things — except to fund elections. (I’ve written more on this, here.)

2. Artificial Intelligence

AI is not human, although it’s becoming increasingly difficult for many real people to tell the difference between “artificial general intelligence” and a real person.

As a result, some real people have lost touch with reality — becoming emotionally attached to AI chatbots, or fooled into believing that AI “deepfake” videos are real, or attributing higher credibility to AI than is justified — sometimes with tragic results.

In his typically ass-backward pro-billionaire way, Donald Trump has issued an executive order aimed at stopping states from regulating AI. But some governors — most interestingly, Florida’s Ron DeSantis — have decided to establish guardrails nonetheless.

DeSantis is calling on Florida’s lawmakers to require tech companies to notify consumers when they are interacting with AI, not to use AI for therapy or mental health counseling, and to give parents more controls over how their children use AI. DeSantis also wants to restrict the growth of AI data centers by eliminating state subsidies to tech companies for such centers and preventing such facilities from drying up local water resources.

In a recent speech, DeSantis said:

“We as individual human beings are the ones that were endowed by God with certain inalienable rights. That’s what our country was founded upon — they did not endow machines or these computers for this.”

I never thought I’d be agreeing with Ron DeSantis, but on this one he’s right.

Corporations are legal fictions. Humanlike AI is a technological fiction. Neither has human rights. Both should be regulated for the benefit of human beings.

3. Non-Americans and suspected enemies

The third illustration of our current confusion over what is a human being is endemic in Trump’s policies toward immigrants and many inhabitants of other nations, now especially in and around Venezuela.

On Wednesday, a federal agent shot and killed a 37-year-old woman during an immigration raid in Minneapolis. Despite what Trump and Kristi Noem say, a video at the scene makes clear that the shooting was not in self-defense.

Minnesota Gov. Tim Walz said: “We have been warning for weeks that the Trump administration’s dangerous, sensationalized operations are a threat to our public safety,” adding that it cost a person her life.

ICE agents are arresting and detaining people on mere suspicion that they are not in the United States legally — sometimes deporting them to foreign nations where they’re brutalized — without any independent findings of fact (a minimum of “due process”).

Meanwhile, Trump and Stephen Miller, his assistant for bigotry and nativism, are busy dehumanizing immigrants. For example, Trump describes Somali Americans as “garbage.”

Last weekend, the U.S. killed an estimated 75 people in its attack on Venezuela, as it abducted President Nicolás Maduro and his wife. The U.S. has been bombing and killing sailors on small vessels in the Caribbean and off the coast of Venezuela on the suspicion they’re smuggling drugs into the United States — on the vague pretext that they’re “enemy combatants,” although Congress has not declared war.

Trump’s justification for all such killings has shifted from preventing drug smuggling to “regaining control” over oil reserves that Venezuela nationalized 50 years ago.

In all these cases, the Trump regime is violating fundamental universal human rights considered essential to human dignity.

Corporations and AI are not human beings, but people who come to the United States seeking asylum indubitably are human. So too are undocumented people who arrived in the United States when they were small children and have been here ever since. As are our neighbors and friends who, although undocumented, are valued members of our communities.

As are the Venezuelans who have been murdered by the Trump regime.

So, what does it mean to be a human being?

It means the right to be protected from the big-money depredations of giant corporations, and from the emotional lure of AI disguised as a human.

And it means to be treated respectfully — as a member of the human race possessing inherent, inalienable rights.

These are moral imperatives. But America is doing exactly the reverse.

  • Robert Reich is an emeritus professor of public policy at Berkeley and former secretary of labor. His writings can be found at https://robertreich.substack.com/.
  • Robert Reich's new memoir, Coming Up Short, can be found wherever you buy books. You can also support local bookstores nationally by ordering the book at bookshop.org

What next at the Fed? Will the AI bubble burst? For Trump, economic questions mount

The U.S. economy heads into 2026 in an unusual place: Inflation is down from its peak in mid-2022, growth has held up better than many expected, and yet American households say that things still feel shaky. Uncertainty is the watchword, especially with a major Supreme Court ruling on tariffs on the horizon.

To find out what’s coming next, The Conversation checked in with finance professors Brian Blank (Mississippi State) and Brandy Hadley (Appalachian State), who study how businesses make decisions amid uncertainty. Their forecasts for 2025 and 2024 held up notably well. Here’s what they’re expecting from 2026 — and what that could mean for households, workers, investors and the Federal Reserve:

What’s next for the Federal Reserve?

The Fed closed out 2025 by cutting its benchmark interest rate by a quarter of a percentage point — the third cut in a year. The move reopened a familiar debate: Is the Fed’s easing cycle coming to an end, or does the cooling labor market signal a long-anticipated recession on the horizon?

While unemployment remains relatively low by historical standards, it has crept up modestly since 2023, and entry-level workers are starting to feel more pressure. What’s more, history reminds us that when unemployment rises, it can do so quickly. So economists are continuing to watch closely for signs of trouble.

So far, the broader labor market offers little evidence of widespread worsening, and the most recent employment report may even be more favorable than the top-line numbers made it appear. Layoffs remain low relative to the size of the workforce — though this isn’t uncommon — and more importantly, wage growth continues to hold up. That’s in spite of the economy adding fewer jobs than most periods outside of recessions.

Gross domestic product has been surprisingly resilient; it’s expected to continue growing faster than the pre-pandemic norm and on par with recent years. That said, the recent shutdown has prevented the government from collecting important economic data that Federal Reserve policymakers use to make their decisions. Does that raise the risk of a policy miscue and potential downturn? Probably. Still, we aren’t concerned yet.

And we aren’t alone, with many economists noting that low unemployment is more important than slow job growth. Other economists continue to signal caution without alarm.

Consumers, the largest driver of economic growth, continue spending — perhaps unsustainably — with strength becoming increasingly uneven. Delinquency rates — the share of borrowers who are behind on required loan payments in housing, autos and elsewhere — have risen from historic lows, while savings balances have declined from unusually high post-pandemic levels. A more pronounced K-shaped pattern in household financial health has emerged, with older, higher-income households benefiting from labor markets and already seeming past the worst financial hardship.

Still, other households are stretched, even as gas prices fall. This contributes to a continuing “vibecession,” a term popularized by Kyla Scanlon to describe the disconnect between strong aggregate economic data and weaker lived experiences amid economic growth. As lower-income households feel the pinch of tariffs, wealthier households continue to drive consumer spending.

For the Fed, that’s the puzzle: solid top-line numbers, growing pockets of stress and noisier data — all at once. With this unevenness and weakness in some sectors, the next big question is what could tip the balance toward a slowdown or another year of growth. And increasingly, all eyes are on AI.

Is AI a bubble?

The dreaded “B-word” is popping up in AI market coverage more often, and comparisons to everything from the railroad boom to the dot-com era are increasingly common.

Stock prices in some technology firms undoubtedly look expensive as they rise faster than earnings. This may be because markets expect more rate cuts coming from the Fed soon, and it is also why companies are talking more about going public. In some ways, this looks similar to bubbles of the past. At the risk of repeating the four most dangerous words in investing: Is this time different?

Comparisons are always imperfect, so we won’t linger on the differences between this time and two decades ago when the dot-com bubble burst. Let’s instead focus on what we know about bubbles.

Economists often categorize bubbles into two types. Inflection bubbles are driven by genuine technological breakthroughs and ultimately transform the economy, even if they involve excess along the way. Think the internet or transcontinental railroad. Mean-reversion bubbles, by contrast, are fads that inflate and collapse without transforming the underlying industry. Some examples include the subprime mortgage crisis of 2008 and The South Sea Company collapse of 1720.

If AI represents a true technological inflection — and early productivity gains and rapid cost declines suggest it may — then the more important questions center on how this investment is being financed.

Debt is best suited for predictable, cash-generating investments, while equity is more appropriate for highly uncertain innovations. Private credit is riskier still and often signals that traditional financing is unavailable. So we’re watching bond markets and the capital structure of AI investment closely. This is particularly important given the growing reliance on debt financing in some large-scale infrastructure projects, especially at firms like Oracle and CoreWeave, which already seem overextended.

For now, caution, not panic, is warranted. Concentrated bets on single firms with limited revenues remain risky. At the same time, it may be premature to lose sleep over “technology companies” broadly defined or even investments in data centers. Innovation is diffusing across the economy, and these tech firms are all quite different. And, as always, if it helps you sleep better, changing your investments to safer bonds and cash is rarely a risky decision.

A quiet but meaningful shift is also underway beneath the surface. Market gains are beginning to broaden beyond mega-cap technology firms, the largest and most heavily weighted companies in major stock indexes. Financials, consumer discretionary companies and some industrials are benefiting from improving sentiment, cost efficiencies and the prospect of greater policy clarity ahead. Still, policy challenges remain ahead for AI and housing with midterms looming.

Will things ever feel affordable again?

Policymakers, economists and investors have increasingly shifted their focus from “inflation” to “affordability,” with housing remaining one of the largest pressure points for many Americans, particularly first-time buyers.

In some cases, housing costs have doubled as a share of income over the past decade, forcing households to delay purchases, take more risk or even give up on hopes of homeownership entirely. That pressure matters not only for housing itself, but for sentiment and consumption more broadly.

Still, there are early signs of relief: Rents have begun to decline in many markets, especially where new supply is coming online, like in Las Vegas, Atlanta and Austin, Texas. Local conditions such as zoning rules, housing supply, population growth and job markets continue to dominate, but even modest improvements in affordability can meaningfully affect household balance sheets and confidence.

Looking beyond the housing market, inflation has fallen considerably since 2021, but certain types of services, such as insurance, remain sticky. Immigration policy also plays an important role here, and changes to labor supply could influence wage pressures and inflation dynamics going forward.

There are real challenges ahead: high housing costs, uneven consumer health, fiscal pressures amid aging demographics and persistent geopolitical risks.

But there are also meaningful offsets: tentative rent declines, broadening equity market participation, falling AI costs and productivity gains that may help cool inflation without breaking the labor market.

Encouragingly, greater clarity on taxes, tariffs, regulation and monetary policy may arrive in the coming year. When it does, it could help unlock delayed business investment across multiple sectors, an outcome the Federal Reserve itself appears to be anticipating.

If there is one lesson worth emphasizing, it’s this: Uncertainty is always greater than anyone expects. As the oft-quoted baseball sage Yogi Berra memorably put it, “It’s tough to make predictions, especially about the future.”

Still, these forces may converge in a way that keeps the expansion intact long enough for sentiment to catch up with the data. Perhaps 2026 will be even better than 2025, as attention shifts from markets and macroeconomics toward things that money can’t buy.

This surging force is an existential menace — and it's capturing our leaders

“This is the West, sir. When the legend becomes fact, print the legend.” — journalist in the 1962 film, The Man Who Shot Liberty Valance

The top editors at Time (yes, it still exists) looked west to Silicon Valley and decided to print the legend last week when picking their Person of the Year for the tumultuous 12 months of 2025. It seemed all too fitting that its cover hailing “The Architects of AI” was the kind of artistic rip-off that’s a hallmark of artificial intelligence: 1932’s iconic newspaper shot, “Lunch Atop a Skyscraper,” “reimagined” with the billionaires — including Elon Musk and OpenAI’s Sam Altman — and lesser-known engineers behind the rapid growth of their technology in everyday life.

Time’s writers strived to outdo the hype of AI itself, writing that these architects of artificial intelligence “reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”

OK, but it’s a tool that’s clearly going to need a lot more work, or architecting, or whatever it is those folks out on the beam do. That was apparent on the same day as Time’s celebration, when it was reported that Washington Post editors got a little too close to the edge when they decided they were ready to roll out an ambitious scheme for personalized, AI-driven podcasts based on factors like your personal interests or your schedule.

The news site Semafor reported that the many gaffes ranged from minor mistakes in pronunciation to major goofs like inventing quotes — the kind of thing that would get a human journalist fired on the spot.

“Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale,” a dismayed, unnamed editor reported.

The same-day contrast between the Tomorrowland swooning over the promise of AI and its glitchy, real-world reality felt like a metaphor for an invention that, as Time wasn’t wrong in reporting, is so rapidly reshaping our world. Warts and all.

Like it or not.

And for most people (myself included), it’s mostly “or not.” The vast majority understands that it’s too late to put this 21st-century genie back in the bottle, and like any new technology there are going to be positives from AI, from performing mundane organizing tasks that free up time for actual work, to researching cures for diseases.

But each new wave of technology — atomic power, the internet, and definitely AI — increasingly threatens more risk than reward. And it’s not just the sci-fi notion of sentient robots taking over the planet, although that is a concern. It’s everyday stuff. Schoolkids not learning to think for themselves. Corporations replacing salaried humans with machines. Sky-high electric bills and a worsening climate crisis because AI runs on data centers with an insatiable need for energy and water.

The most recent major Pew Research Center survey of Americans found that 50 percent of us are more concerned than excited about the growing presence of AI, while only 10 percent are more excited than concerned. Drill down and you’ll see that a majority believes AI will worsen humans’ ability to think creatively, and, by a whopping 50-to-5 percent margin, also believes it will worsen our ability to form relationships rather than improve it. These, by the way, are two things that weren’t going well before AI.

So naturally our political leaders are racing to see who can place the tightest curbs on artificial intelligence and thus carry out the will of the peop... ha, you did know this time that I was kidding, didn’t you?

It’s no secret that Donald Trump and his regime were in the tank from Day One for those folks out on Time‘s steel beam, and not just Musk, who — and this feels like it was seven years ago — donated a whopping $144 million to the Republican’s 2024 campaign. Just last week, the president signed an executive order aiming to press the full weight of the federal government, including Justice Department lawsuits and regulatory actions, against any state that dares to regulate AI. He said that’s necessary to ensure US “global AI dominance.”

This is a problem when his constituents clearly want AI to be regulated. But it’s just as big a problem — perhaps bigger — that the opposition party isn’t offering much opposition. Democrats seem just as awed by the billionaire grand poobahs of AI as Trump. Or the editors of Time.

Also last week, New York Democratic Gov. Kathy Hochul — leader of the second-largest blue state, and seeking reelection in 2026 — used her gubernatorial pen to gut the more-stringent AI regulations that were sent to her desk by state lawmakers. Watchdogs said Hochul replaced the hardest-hitting rules with language drafted by lobbyists for Big Tech.

As the American Prospect noted, Hochul’s pro-Silicon Valley maneuvers came after her campaign coffers were boosted by fundraisers held by venture capitalist Ron Conway, who has been seeking a veto, and the industry group Tech:NYC, which wants the bill watered down.

It was a similar story in the biggest blue state, California, where Gov. Gavin Newsom in 2024 vetoed the first effort by state lawmakers to impose tough regulations on AI, and where a second measure did pass but only after substantial input from lobbyists for OpenAI and other tech firms. Silicon Valley billionaires raised $5 million to help Newsom — a 2028 White House front-runner — beat back a 2021 recall.

Like other top Democrats, Pennsylvania Gov. Josh Shapiro favors some light regulation for AI but is generally a booster, insisting the new technology is a “job enhancer, not a job replacer.” He’s all in on the Keystone State building massive data centers, despite their tendency to drive up electric bills and their unpopularity in the communities where they are proposed.

Money talks, democracy walks — an appalling fact of life in 2025 America. In a functioning democracy, we would have at least one political party that would fly the banner of the 53 percent of us who are wary of unchecked AI, and even take that idea to the next level.

A Harris Poll found that, for the first time, a majority of Americans also see billionaires — many of them fueled by the AI bubble — as a threat to democracy, with 71 percent supporting a wealth tax. Yet few of the Democrats hoping to retake Congress in 2027 are advocating such a levy. This is a dangerous disconnect.

Time magazine got one thing right. Just as its editors understood in 1938 that Adolf Hitler was its Man of the Year because he’d influenced the world more than anyone else, albeit for evil, history will likely look back at 2025 and agree that AI posed an even bigger threat to humanity than Trump’s brand of fascism. The fight to save the American Experiment must be fought on both fronts.

  • Will Bunch is the national columnist for the Philadelphia Inquirer -- with some strong opinions about what's happening in America around social injustice, income inequality and the government.