‘Why Use ChatGPT:’ Author Of ‘Black Swan’ Says OpenAI’s Chatbot Requires Deep Expertise, Makes Mistakes Only A ‘Connoisseur Can Detect’


    Nassim Nicholas Taleb, renowned author of “Black Swan,” has once again voiced his skepticism about OpenAI’s ChatGPT, stating that the functionality of this AI-powered chatbot comes with a condition. 

    What Happened: Over the weekend, Taleb took to X, formerly Twitter, and posted his “verdict” on ChatGPT, stating that the chatbot is only usable if one has in-depth knowledge of the subject. 

    He went on to point out that ChatGPT often makes errors that can only be detected by a “connoisseur,” citing an example of an incorrect linguistic interpretation.

“So if you must know the subject, why use ChatGPT?” he asked.

    See Also: How To Use ChatGPT On Mobile Like A Pro

    He went on to say that he uses the chatbot for writing “condolences letters” and it fabricates “quotations and sayings.”

    In the comment section, people suggested Taleb consider ChatGPT as a sophisticated typewriter instead of a definitive source of truth. One person said that OpenAI’s AI-powered chatbot is not the “smartest assistant on the planet but you can correct and direct work to move faster.”

However, some people agreed with him, saying that ChatGPT is “too risky” for certain work assignments.

Why It’s Important: This isn’t the first time that Taleb has critiqued ChatGPT’s limitations.


Last year, he highlighted the chatbot’s inability to grasp the ironies and nuances of history. Taleb has also expressed frustration with ChatGPT’s lack of wit in conversations.

The same year, it was reported that a lawyer’s use of ChatGPT for legal assistance backfired when the chatbot fabricated nonexistent cases.

However, around the same time, several reports highlighted that not only ChatGPT but also other generative AI models, such as Microsoft Bing AI and Google Bard (now called Gemini), tend to hallucinate, presenting made-up facts with utmost conviction.

In fact, in April 2023, Google CEO Sundar Pichai acknowledged AI’s “hallucination problems,” saying, “No one in the field has yet solved the hallucination problems. All models do have this as an issue.”

    Check out more of Benzinga’s Consumer Tech coverage by following this link

    Read Next: ‘Look What’s Happened’: Elon Musk Questions OpenAI’s Path After Nvidia’s AI Supercomputer Donation To ChatGPT-Maker In 2016

    Image via Shutterstock

