Navigating the Generative AI Hype: A Guide for Business Leaders to Think Critically and Avoid Technology Pitfalls

  Tech bubbles can cause confusion for business leaders: They may feel that they must invest in a new technology early to get a competitive advantage, but they do not want to fall headfirst into the trap of false hype. As we enter an era of heightened economic uncertainty and layoffs across industries, executives are struggling to figure out which costs should be cut and which investments should be increased.
  The rapidly developing fields of artificial intelligence (AI) and machine learning pose special challenges for corporate decision-makers. A growing consensus that investing in proven predictive models is a safe bet is expected to drive AI spending from $33 billion in 2021 to $64 billion in 2025. At the leading edge of that trend, however, generative AI is stirring up a great deal of misinformation and speculation.
  Generative AI refers to a family of machine learning models (such as the ChatGPT chatbot, Bing AI, the DALL-E image generator, and the Midjourney image tool) that are trained on massive databases of text and images and generate new text and images in response to prompts. Headlines like “10 indispensable ChatGPT secrets” and “You are using ChatGPT wrong! How to stay ahead of 99% of ChatGPT users” have proliferated. Meanwhile, the U.S. digital news site Axios reported that money is pouring into generative AI, growing from US$613 million in 2022 to US$2.3 billion in 2023, an influx that will only fuel the frenzy.
  Business leaders who don’t want to miss a genuine opportunity, or waste time and money on a technology that doesn’t live up to its billing, would do well to take some basic facts about tech bubbles to heart. First, as business school professors Brent Goldfarb and David Kirsch argue in their 2019 book Bubbles and Crashes: The Boom and Bust of Technological Innovation, hype is fueled by the stories people tell about how new technologies will develop and reshape the economy. Unfortunately, the early stories surrounding new technologies are almost always riddled with fallacies. Indeed, overestimating the prospects and potential of new systems is at the heart of the bubble problem.
  Business prognosticators and analysts have a poor track record of predicting the future of technology, because no one can foresee how ingeniously people will choose, apply, and creatively repurpose new tools, or how those tools will evolve over time. Or, as futurist Roy Amara put it in what became known as “Amara’s Law”: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
  The kinds of exaggerated stories that have inflated previous technology bubbles have also appeared around generative AI. Some enthusiasts claim that ChatGPT is only a few steps away from artificial general intelligence, an independent entity with cognitive capabilities matching or even surpassing humans’. Sam Altman, CEO of ChatGPT developer OpenAI and heavily invested in the field, has claimed that artificial intelligence “will eclipse the agricultural revolution, the industrial revolution, and the Internet revolution combined.” The Future of Life Institute likewise believes that large language models will have far-reaching impacts, though its view is darker: The group published an open letter calling for a moratorium of at least six months on training AI systems more powerful than GPT-4 (the large language model behind ChatGPT Plus), arguing that such systems pose a threat to humanity as a whole.
  Although these proponents and critics disagree, together they fuel feverish visions of the future that are divorced from what companies can actually accomplish today with existing generative AI tools. They do little to help leaders understand how these technologies work or what risks and limitations they carry, let alone whether the tools can improve a business’s day-to-day operations and bottom line.
  The news media, itself an industry driven by the fear of missing out (FOMO), has further inflated the bubble with alarming and exaggerated reports. The Wall Street Journal recently published an article titled “Generative AI Is Already Changing White-Collar Work as We Know It” that offered no real evidence of changes in white-collar jobs, only business leaders’ speculation about the technology’s potential impact. Like other reports sounding alarm bells for professionals, it cited the abstract of a paper co-authored by researchers at OpenAI and the University of Pennsylvania that attempts to predict how many jobs will be affected by these new software systems.
  In fact, such predictions have a history of being wrong. In a recent article, economics professor Gary Smith and retired professor and technology consultant Jeffrey Funk noted that the OpenAI and University of Pennsylvania study drew on the same U.S. Department of Labor database as a 2016 study by Oxford University and Deloitte, which claimed that many jobs were likely to be automated away by 2030. Both studies first tried to calculate the share of jobs dominated by repetitive tasks and then predicted how many of those jobs would be lost to technological change. Since the trends of the past seven years do not appear to bear out the 2016 study’s predictions, there is little reason to believe that forecasts like these are accurate.
A word of caution

  Given how often past forecasts have missed the mark, executives must proceed with caution and keep a cool head amid the hype about technology’s future impact. Teams need to practice “evidence-based skepticism”: not reflexive doubt or denial, but rigorous, scientific evaluation and reasoning. Claims about a new technology’s efficacy must be carefully examined and tested in practice. Rather than dwelling on speculative questions like “How will it evolve?” or “What will its impact be?” leaders should start with fact-based questions such as “What do we know?” and “What evidence is there?” and ask specifically how the technology works, how reliable its predictions are, and how good its other outputs are.
  Business leaders must be particularly intentional about critical thinking when information comes from known parties involved in technology hype, including consulting firms, vendors, and industry analysts.
  While experimenting with publicly available generative AI tools may be cost-effective and instructive, companies must carefully weigh the risks of adopting any new technology. ChatGPT, for example, is known to fabricate false information, including citing strings of nonexistent references. Strict governance of the technology’s use is necessary, especially when the output of generative AI systems will be shown to customers, where errors could damage a company’s reputation. Companies also risk losing control of intellectual property or sensitive information if such systems are used without oversight. Samsung employees, for instance, inadvertently leaked sensitive corporate information by entering it into ChatGPT, which then used the submitted material as training data for its models. Some artists, designers, and publishers have likewise resisted using generative AI out of concern that it could harm their own or their clients’ intellectual property.
  Given these risks, companies planning to test the waters of generative AI should establish ground rules for its use. An obvious first step is to require all employees who use these technologies at work to disclose that use. A corporate technology-use policy can also set baseline requirements, such as stipulating that the use of generative AI must not violate existing ethical and legal norms. Companies should further consider limiting the types of corporate data that can be fed into generative systems. The Society for Human Resource Management (SHRM) and other groups have recently released guidelines for using generative AI in the workplace, and business leaders would do well to keep up with such developments.
  There are other risks businesses should be aware of. Technology commentators have warned that corporate deployment of generative AI could degrade employees’ quality of life and make their work increasingly difficult. Leaders would do well to ensure this doesn’t happen, and instead to deploy these technologies in ways that make employees’ work easier, less stressful, and more humane.
  FOMO and competitive pressure can play a positive role if they prompt managers to pay attention to changes around them, but managers should not let such anxieties drive them into irrational, rash decisions. As Nicholas Carr argued in his 2004 book Does IT Matter?, the excitement around generative AI is likely to subside, just as it has for other digital technologies. Adopting a digital technology often confers a short-term advantage within an industry, but that advantage disappears as the technology becomes commonplace, as happened with text editors, spreadsheets, and customer relationship management systems.
  In other words, there is currently no evidence that thinking things through, rather than diving headfirst into generative AI, will leave your company strategically behind, let alone disrupted outright. That being the case, leaders are better off focusing on the fundamental goals of their business and asking, “Will this system help us achieve our goals?” If someone says it will, ask them to prove it.