AI enhances a person's productivity, making them more useful to a company; it does not replace the person. Just like all the technology trends that came before -- the technology is an enabler. Take a look at the banking industry: at some point it went from doing everything on paper by hand to doing everything on a computer. The employees who made the transition to computers were rewarded with long careers; the few who were incapable of making the transition were let go. Salaries in the banking industry went up, people became more productive, and the banks earned much greater profits -- all thanks to the transition to computers. Overall banking employment increased significantly because of the technology change rather than decreasing.
That is a myth. It was true of old AI tech. Keep the blinders on. It has already replaced people. I will check back in 5 years and rub it in your face. You just don't understand the tech. Cold callers and customer service people handling calls are no longer needed, to name just one example. https://synthflow.ai/ https://www.bland.ai/ In many cases, there will just be a few people overseeing the AI.
@gwb-trading This article is really old in AI terms. It is from Feb 28, 2024.

https://www.forbes.com/sites/jackke...om-ai-and-which-professions-are-most-at-risk/

What White-Collar Jobs Are Safe From AI—And Which Professions Are Most At Risk?

The accelerated ascendancy of artificial intelligence has created a booming new tech sector offering plentiful opportunities for growth. Understandably, there are grave concerns about how this fast-emerging technology will impact the job market. Most notably, white-collar workers are fearful that they may be made redundant, as AI poses a threat to their job security. Like robotics impressed upon the blue-collar labor market, in factories and warehouses, “AI is on a collision course with white-collar, high-paid jobs,” CNBC reported.

“AI is distinguished from past technologies that have come over the last 100-plus years,” said Rakesh Kochhar, a senior researcher at Pew Research Center. “It is reaching up from the factory floors into the office spaces where white-collar, higher-paid workers tend to be.”

Investment bank Goldman Sachs predicted in a 2023 report that the workforce in the United States and Europe would be upended, with 300 million jobs lost or diminished by this fast-growing technology.

In a recent survey conducted by ResumeBuilder, 37% of business leaders revealed they have already begun to replace staff with AI. Nearly half (44%) of the executive respondents stated they anticipate further job cuts in 2024 due to AI efficiency. Several employers have already enacted headcount reductions this year. Additionally, it looks like they plan to reallocate the money saved from downsizing toward investments in AI, machine learning, automation, and bringing aboard experienced professionals in this space. The redirection of funds and resources to this fast-emerging technology adds another layer of job risk for white-collar workers.

White-Collar Jobs That Are Less Likely To Be Impacted By AI

Roles that require a significant social or emotional component are less susceptible to automation due to the human element involved, such as therapists, counselors, social workers and teachers. Additionally, high-level white-collar workers who are responsible for making complex business decisions are less likely to be displaced by AI. Customer-facing positions, such as salespeople who need to engage and build relationships with clients, are safe from being made obsolete by this technology.
I understand technology perfectly well. I am very familiar with technology in contact centers. BTW -- contact centers are hiring, and AI is assisting them in being more productive, thus expanding their workforce while handling more volume. AI capabilities such as deciding the Next Best Action, and automatic research from conversation context using Natural Language Processing coupled with predictive & adaptive decisioning (e.g., what credit card to offer the customer), have enhanced contact center operations and have driven hiring. And keep in mind that over 90% of the people contacting a service center via chat or phone typically try to get to a live operator ASAP -- doing their utmost to bypass all the automated prompts and features, which normally don't solve their question or issue. Automated chats with AI agents are already considered one of the most frustrating customer service experiences on the face of the earth. Using AI to assist a live agent works out well -- using AI to replace a live agent is a customer service fail. A few CEOs have made noises about replacing people with AI -- most have found that the concept is not working out. The CEO of Duolingo is the latest character claiming he will replace all of his contractors (the company is primarily contractors) with AI. We will see how that works out for him about a year from now.
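To make "Next Best Action" concrete, here is a minimal sketch of that kind of decisioning logic -- hypothetical names and scoring functions of my own, not any vendor's actual API. In production the scores would come from a predictive model trained on conversion data, and the suggestion is surfaced to the live agent rather than acted on automatically:

```python
# Hypothetical sketch of "next best action" agent-assist logic.
# Names and scoring rules are illustrative only -- not any vendor's real API.
from dataclasses import dataclass

@dataclass
class Customer:
    credit_score: int
    avg_monthly_spend: float
    has_travel_history: bool

# Each candidate offer gets a scoring function over the customer profile.
OFFERS = {
    "cashback_card": lambda c: 0.5 + 0.3 * (c.avg_monthly_spend > 2000),
    "travel_card":   lambda c: 0.2 + 0.6 * c.has_travel_history,
    "secured_card":  lambda c: 0.9 if c.credit_score < 600 else 0.1,
}

def next_best_action(customer: Customer) -> str:
    """Score every candidate offer and return the highest-scoring one.

    The result is shown to the live agent as a suggestion; the agent
    still decides whether to present it to the caller.
    """
    return max(OFFERS, key=lambda name: OFFERS[name](customer))

if __name__ == "__main__":
    caller = Customer(credit_score=720, avg_monthly_spend=2500.0,
                      has_travel_history=True)
    print(next_best_action(caller))  # suggestion surfaced to the agent
```

The design point is the same one I keep making: the AI ranks options in the background so the human on the call is faster and better informed, not so the human disappears.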
You don't understand the technology. You are looking at the past. You are seeing what is, not what will be. The tech I am talking about is being built right now; a lot of it will be here by the end of 2026. The automation and AI revolution is just really starting, so I am talking about the future, not the current state. Do you honestly think that a company like JPM would announce right now that it will replace 50% of its workforce with AI in the next 5 years? Of course not! That would kill morale and destroy the company.
Companies are looking at the future as well. They are seeing which tasks can be done with AI and whether AI is actually effective at those tasks (without hallucinations, etc.). While a small number of CEOs are making noise about replacing large numbers of employees with AI (which the media eats up), most executives (CEOs, COOs, CTOs, CIOs, CFOs, etc.) are finding that AI supplements what employees do today and will enhance their productivity rather than replace them. Not to mention all the new jobs that will be created to maintain and train the AI.
Ok, put a number on unemployment in the US in 2026. We can see if you are right, noting of course that tariffs are already a positive catalyst for unemployment (not AI). My bet is it'll be barely a story in 2026, kind of like the whole "Canada will lose big" narrative. It's pointless. We see the worst long-term forecasts on this site: the "McDonald's will be out of business in 5 years" idea, or the SPX is going to 300, or Covid lows will be broken imminently.
That is impossible right now because there are too many unknown variables at this point. After all the trade deals are inked, I would be more comfortable calculating probabilities. All I know is that AI will probably start replacing employees on a larger scale by the end of next year. The tech is not here yet; there are too many security issues that currently have to be solved. Once those are solved, there won't be many jobs left if nothing changes. It will wipe out most of the middle class in the US if nothing is done.
In most cases with large companies, it won't create any jobs. The current dev team in place is all that is needed; all you need is the data. They have already identified the hallucination issue, and it will be fixed soon. There are other security issues that are a much bigger problem than hallucinations, and these are being worked on now.

https://www.anthropic.com/research/tracing-thoughts-language-model

Hallucinations

Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.

It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.

By intervening in the model and activating the "known answer" features (or inhibiting the "unknown name" or "can’t answer" features), we’re able to cause the model to hallucinate (quite consistently!) that Michael Batkin plays chess.

Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response.
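If the circuit description in that excerpt is hard to picture, here is a toy sketch of the gating it describes -- purely illustrative Python of my own, not the actual model internals, with a made-up name ("Famous Q. Familiar") standing in for an entity the model recognizes but knows nothing about:

```python
# Toy cartoon of the hallucination circuit described in the Anthropic post.
# All names, structures, and logic here are illustrative, not model internals.

KNOWN_FACTS = {"Michael Jordan": "basketball"}               # entities with real knowledge
RECOGNIZED_NAMES = {"Michael Jordan", "Famous Q. Familiar"}  # names that "ring a bell"

def answer(entity: str, intervene: bool = False) -> str:
    # Default circuit: refusal is "on" unless something inhibits it.
    refusal_active = True

    # The "known entity" feature fires on recognition (or when researchers
    # intervene) and inhibits the default refusal circuit.
    if entity in RECOGNIZED_NAMES or intervene:
        refusal_active = False

    if refusal_active:
        return "I don't have enough information to answer."  # default behavior

    # Refusal suppressed: answer from real facts if any exist; otherwise
    # confabulate -- the "misfire" case, where the name is recognized but
    # nothing else is actually known about the entity.
    fact = KNOWN_FACTS.get(entity, "chess")  # "chess" = made-up filler answer
    return f"{entity} plays {fact}."

print(answer("Michael Jordan"))                  # correct: basketball
print(answer("Michael Batkin"))                  # unknown entity -> refuses
print(answer("Michael Batkin", intervene=True))  # intervention -> hallucinates "chess"
print(answer("Famous Q. Familiar"))              # natural misfire -> hallucinates
```

The point of the cartoon is that hallucination isn't random noise: it is the refusal gate being suppressed without real knowledge behind it, which is exactly why targeted fixes look plausible.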