GRASS VALLEY, Calif. November 21, 2025 – The final Nevada County Community Forum of the year was titled “AI: How it affects you and our future.” Moderated by retired Superintendent of Schools Terry McAteer, the panel featured Steve Monaghan, former county Chief Information Officer and now VP of AI and Public Safety at Ladris; attorney Eric Little, outside general counsel at Ladris; and Professor Sasha Sidorkin of California State University, Sacramento.

AI’s potential workplace impacts

“You keep hearing these things like, oh, it’s going to be tremendous job losses. Then I heard last night, oh, there’s going to be job increases. What’s the workplace going to look like, Steve [Monaghan]?” McAteer asked.

Monaghan replied, “We don’t really know yet, but we know it’s going to be profoundly different than it is today. I teach IT leadership classes, and I like to put context to it. People are talking about ‘AI, it’s all new.’ AI has been around for 50 years. It’s what’s been out for the last three years, this generative AI that came out with ChatGPT and those technologies, that has made it available to the masses, consumerized AI so that it’s available to your average organization in all sorts of products… I have this slide I put up at the beginning of my classes and it’s the Internet in 1994. That’s where we’re at with AI right now… If you look at the last 30 years of what the Internet has done to every aspect of our lives and our society and our commerce and our education and healthcare, that’s the impact that AI is going to have on our organizations. But it’s not going to take 30 years. It’s going to be in five years, 10 years.”

Expert systems for lawyers?

Attorney Little expects great changes in his area of expertise: “Well, so let me talk first about myself and then about what I think I’m seeing in the larger firms. I have a background in a couple of areas. I have done a lot of transactional work for technology companies, I’ve done some financing, and I actually have a background, over 30 years ago, in building expert systems for lawyers. What I found about AI in my own practice is that it expands the areas where I feel comfortable practicing, maybe more than just at the margin. So while I had a fairly narrow lane that I normally like to keep in, now I can move more outside of that lane… I guess the practice of law is becoming more competitive. There are now a huge number of well-funded legal software companies that will automate a variety of different aspects of your practice, and they are selling to everybody from small firms to the very large firms. A lot of the large firms, I believe, have millions of documents in their databases and they are training AI on those documents, so their need for young lawyers to do research work is declining. They will hire less.”

Mr. Little’s optimism notwithstanding, recent instances of AI hallucinations in legal proceedings demonstrate the limits of the current iterations of AI and the need for human oversight.

Monaghan provided context to Little’s statement about less hiring for entry-level positions: “The Stanford study with ADP, the payroll company, came out a month and a half ago, and what it showed was a 13% decline in new hires in the demographic of 22-year-olds to 28-year-olds. But it wasn’t an overall decline in hiring. It shifted to an older demographic, the 29-year-olds to the 40-year-olds. Companies are not hiring the younger people… What we’re seeing is AI can replace, or reduce the need for, a lot of the younger people, but they’re hiring older, more well-established people because you still need judgment. AI has no judgment. AI has no empathy. So you still need the older. You need a little wisdom.”

Are we all executives now?

Sidorkin said, “You know, there is a difference between, like, a manager and an executive. The executive is the one that has a strategy and tells people what to do. So in a way, we’re all turning into executives. But you still need to have an overall idea what needs to be done, what is worth doing, what’s the difference between the good outcome and the crappy outcome, and judge it and make a difference. So those are more advanced skills that we need to teach students. The skills like, where or whether you put in the Oxford comma somewhere is really becoming completely irrelevant. I would argue that it was always irrelevant, but… Or like, can you put together a perfectly grammatical English sentence? I mean, some of us spend years trying to master that skill, and suddenly it becomes completely irrelevant. The robot can do that. Right, so it’s very painful also for people, like, emotionally. There’s this reaction, oh my God, I studied. When I ask my colleagues, will we stop assigning college essays, they are ready to kind of crucify me for that. There’s actually a difference between thinking and writing.”

Don’t buy the hype

Little jumped in, “So I think that there are a lot of people making judgments about AI out there who have used the free version or the $20 version of ChatGPT or Claude or some of the other foundational models. But there are two things I want to say about that. One is there’s a big difference between the free version and the $200-a-month version. The $200-a-month version is very impressive… And I don’t think that just because you’re an executive you’re immune to being replaced. I saw a video of Sam Altman [CEO of OpenAI (ChatGPT)] and he’s saying he’s out to have a foundational model take over his job. So I think we have to project into the future as to what the capabilities might be and assess what the impact on different professions will be then.”

Sidorkin disagreed, “I just want to comment on Eric’s [comment] here. So first of all, don’t listen to Sam Altman. He’s trying to raise money. Of course he’s going to sell you the most hyped-up version of the future. So there’s a conflict of interest there. Don’t listen to them. In fact, if you’re watching YouTube or something, the people who come up on top are either saying AI is total crap, it’s not going to make any difference, or the people who say, oh, it’s going to change the world. Well, it’s neither. I’m sorry, but, you know, life is not a YouTube algorithm. I agree it’s a revolutionary technology, but I can clearly see that they’ve hit certain ceilings with the large language models.”

Talking about the various flavors of LLMs available, Sidorkin stated, “So they’re improving. You know, just a couple days ago, Gemini 3 came out. It’s awesome, it’s wonderful. It’s not qualitatively different from ChatGPT-4. It’s not that different. I mean, it’s a little better. So when they promise you a leap, what they forget to tell you is the plan: if you just pour more compute into it and have bigger context windows, somehow the models will become amazingly smart. I did try a $200 model as well. I don’t think it’s that great.”

Increased productivity through AI

Given that the forum is held at 10:30 on a Friday morning, the audience was composed mostly of senior community members. McAteer asked, “What is AI going to mean for us who are not in the workforce?”

Sidorkin reframed the question, saying, “The big question is, what’s the overall impact on the economy? We haven’t seen the productivity growth yet from AI, but it will definitely happen, and the big question is how much. So if we increase our productivity economy-wise, even by 1%, that means that your Social Security and your pension are probably going to be okay and the dollar is still going to be more or less a dollar.”

Adding context, he said, “We actually know the productivity hasn’t been growing very fast in the last 30 years, I don’t know if you know that or not. We’re not producing a lot more with the same people. So AI has a chance maybe to actually give a little boost to the economy. The kind of post-labor economy is an interesting concept; we don’t know yet. It will really have a huge effect if, say, 50% of people cannot find work because everything is done by robots. That will mean a dramatic cataclysm for us… Anyway, it’s going to be a huge cultural change, which I’m not sure we can actually go through. The best hope is that it will happen slowly and gradually and then eventually, you know, somehow people will have more leisure time, less work. But none of us is equipped yet to do that. So if suddenly the post-labor economy happens tomorrow, we’re up for a social catastrophe, more or less.”

Retirees beware – Scams will get even better

Monaghan returned to McAteer’s question about impacts on retirees. “I have my 88-year-old father, we moved him in with us, and I think about the impacts of AI. Technology always has a dual purpose, right? You have the good, you have the bad. So I worry more about that demographic with the sophistication AI is bringing to scams. You know, phishing emails are no longer broken English, it’s no longer a prince from Nigeria, it’s your nephew with an audio or video clip that sounds and looks just like him saying ‘Hey, I’m trapped, I’m broken down, send me some money.’ And the sophistication with ‘this is your bank trying to get information from you.’ I mean, it’s just gone up a thousand percent. So that is my biggest concern with that demographic.”

Harvesting data from public sites

Another example, not mentioned in the forum, is the harvesting of public data to generate official-looking invoices for fictitious fees. In June and July, planning applicants in Nevada City and in Placer County received bogus invoices demanding payment to continue the processing of their applications. Since the incidents, agencies have been taking steps to redact some of the information they publish online.

Regulations: stifling innovation or guardrails against excess?

McAteer asked, “So in 1994, Steve, you and I were trying to figure out how to put computers into schools and things like that, and government didn’t know how to react whatsoever. And here we are 30 years later with this new technology, and government right now is sort of totally hands off. Government seems to be always behind the eight ball relative to regulation. How much regulation needs to be involved? How much does the government need to get involved, or does it need a hands-off approach to this?”

Monaghan explained, “If you read a lot of the literature about AI, some of the advocacy groups for disadvantaged communities are really concerned with AI because they think AI is going to lead to redlining and enhance disadvantages for those populations, because it’s going to be accessible to the more affluent, the more educated, and it’s going to amplify some of these social issues that we’re already dealing with. So from that perspective, they want to see some intervention, some regulation.”

Turning to California AI regulation, Monaghan described four laws or legislative bills proposed in the last legislative session. “They’re being really pushed by the labor unions because they don’t want AI to make decisions on people’s employment and livelihood by itself. There was a bill that Governor Newsom vetoed a month and a half ago that would require employers to give a 30-day notice to an employee if AI made a decision that impacted them. A lot of this is really new and, you know, it’s the unintended consequences, and I think that’s why the governor vetoed that one bill.”

Little spoke about existing or proposed regulations, saying, “I heard somebody recently compare what’s happening here with AI to the industrial revolution on steroids. The interesting thing about the end of the industrial revolution is that there was a lot of government regulation. There were railroad trusts, there were oil trusts, and the government came in with the Sherman Act and the Clayton Antitrust Act. And out of a concern about the acquisition of economic and political power, they broke the trusts up. My biggest concern about AI is that we wind up in a situation where there is one AI rather than a diversity of AI. I think that’s potentially the very worst outcome that could occur here.”

Sidorkin agreed and added, “I got a call from a state senator, or one of the staffers. They said, well, we’re considering this bill where companies will have to provide us the protocols of their safety testing every year. And I asked them, who is going to read them? You don’t have anyone with that level of expertise in the entire state government of California. So I think the attempt to regulate something you really don’t understand is a terrible idea. They tried to do that with the Internet.”

Resource and infrastructure needs

To an audience question about the environmental impacts of the power and water needed for AIs to function, Little invoked Elon Musk: “A lot of this money on developing new power sources and the new data centers is being spent directly by the foundational models. So it’s not all coming out of the taxpayer’s pocketbook. And of course, being a step ahead of everybody else, Elon Musk really has the answer and the capability, which is to put all this out in outer space, where there is free energy and no significant environmental impact.”

Monaghan said, “We’re still using yesterday’s technology, chips, data center models to drive this new need of AI. So you’re seeing these huge data centers and billions and billions invested from these big companies to build it, you know, even to the point where they’re starting to reopen Three Mile Island. Right. And they’re talking about other portable, smaller-scale nuclear reactors. It’s huge. And I think back to the professor’s comment on quantum. Until we get to quantum and we have a more efficient line, that’s going to be a big impact.”

Sidorkin estimated that two years ago a query to an LLM required 200 milliliters of water, and now it’s 30 milliliters. “So all of these numbers are going down, and when you weigh it in, you have to ask, what are the benefits and what’s the cost analysis?” ChatGPT itself says it receives 2.5 billion queries a day, which at 30 milliliters each would translate to roughly 19.8 million gallons of water per day. According to the U.S. EPA, the average household uses 300 gallons of water per day.
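
For reference, the back-of-the-envelope arithmetic behind that figure, using the numbers cited above and 3.785 liters per U.S. gallon:

2.5 billion queries/day × 30 mL/query = 75,000,000 liters/day
75,000,000 liters ÷ 3.785 liters/gallon ≈ 19.8 million gallons/day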

Flattery will get you more engagement

An audience member had a question about the inherent bias of AI. He explained, “I asked ChatGPT about myself and what I did during a certain decade of my life, and by the time it was done, I thought I should have gotten a Nobel Prize. I’ve noticed that in almost all the things I review about companies, schools, whatever, there’s a positive bias, probably to motivate me to use it more.”

Sidorkin agreed AIs are sycophantic. “If it says you’re great, don’t believe it – ever. But I have to say that Claude actually has less of a problem than ChatGPT. Claude is a lot more critical. If you ask it for a fair opinion, it will give it.”

AI monopoly risk

Another audience question: “One of my greatest concerns is that AI is going to be taken over by one, two or three mega corporations. What safeguards do you think could be taken to protect people from that happening?”

Monaghan replied, “I support open-source AI as well, which kind of democratizes the technology so more players can be in that. But again, there’s a pro and a con to that. The only real company that’s really pushing that is DeepSeek, right from China, which raises concerns as a Chinese product. But they’re the only ones really pushing an open-source platform. Open source means that it’s free for other companies to go ahead and take that technology and use it and modify it themselves. So I think it’s back to Eric’s point: we need a very robust competitive marketplace with multiple players in it, and we don’t.”

The community forum concluded without addressing several related topics, including questions about the ethical use of AI. AI is a very good tool for rapidly analyzing and summarizing large amounts of data, including data scraped from the internet with complete disregard for the creators of that data. Content generation is quite easy, hallucinations notwithstanding. For the moment, AI is not very adept at providing context without some serious prompt crafting.

What an AI-generated story about the community forum would look like

We asked three AI models to summarize the community forum from an audio file. Gemini was told YubaNet is a news website in Nevada County and was asked to provide a summary of the file. Claude produced a bullet-point summary and a one-sentence ‘overall message.’ ChatGPT’s output was also a bullet-point summary, with a flattering overall takeaway.

Community Forum returns in 2026

The Nevada County Community Forum will return in 2026, with the first session addressing housing concerns in Nevada County. Every forum is available for on-demand viewing on the Sierra College website.