GPT-4, OpenAI’s most powerful artificial intelligence large language model (LLM), is available through a subscription to ChatGPT Plus, which costs $20 a month. If you’re looking for more accurate responses to your queries, AI-generated images, web browsing, data analysis, and access to custom GPT bots all in one place, ChatGPT Plus has proven itself superior to the free version of ChatGPT, which runs on the publicly available GPT-3.5.
ChatGPT uses natural language processing (NLP) to understand, interpret, and mimic human language. OpenAI describes GPT-4 as “10 times more advanced than its predecessor, GPT-3.5. This enhancement enables the model to better understand the context and distinguish nuances, resulting in more accurate and coherent responses.”
This is, in part, because GPT-4 is a much larger language model than GPT-3.5, meaning it has a larger number of parameters. Parameters are the internal values a model learns during training and uses to understand and generate text.
OpenAI doesn’t reveal the exact number of parameters used in GPT-4. However, according to Andrew Feldman, CEO of AI company Cerebras, GPT-4 was trained using around 100 trillion parameters. That’s roughly 570 times GPT-3’s 175 billion parameters, nearly three orders of magnitude more.
GPT-4 also has a longer memory than previous versions, which helps it keep track of long conversations and documents. While GPT-3.5’s short-term memory is around 8,000 words, GPT-4’s short-term memory extends to somewhere between 64,000 and 128,000 words.
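Under the hood, that memory is measured in tokens rather than words, and an English word usually maps to a bit more than one token. For readers curious how that counting works, here is a minimal sketch using OpenAI’s open-source tiktoken tokenizer (the prompt string is just an illustration):

```python
# Count how many tokens a prompt consumes against a model's context window.
# tiktoken is OpenAI's open-source tokenizer library.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")  # cl100k_base encoding

prompt = "What is the most powerful iPhone?"
tokens = encoding.encode(prompt)

print(f"{len(prompt.split())} words -> {len(tokens)} tokens")
```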
Upgrading your ChatGPT account to a Plus subscription gives you access to key features beyond general content creation, including priority access to new features, faster responses, access during peak times, and access to custom GPTs.
Since you can only use custom GPTs with GPT-4, access to the GPT Store is limited to ChatGPT Plus subscribers. The GPT Store features customized AI chatbots powered by GPT-4 that offer different skills, training, and custom instructions.
OpenAI replaced ChatGPT plugins with custom GPT bots, which perform much like the plugins did.
GPT-4 is not only more powerful than GPT-3.5, but it’s also multimodal, meaning it’s capable of analyzing text, images, and voice. For instance, GPT-4 can accept an image as part of a prompt and provide an accurate text response, it can generate images, and it can hold spoken conversations, replying in a voice of its own. These are all important factors when deciding between ChatGPT and ChatGPT Plus.
For example, ChatGPT Plus can “view” an image of your refrigerator contents and provide you with recipes using the ingredients it sees. ChatGPT Plus subscribers can also upload documents for GPT-4 to analyze and make inferences or summaries. By default, ChatGPT Plus will answer you with text, but you can ask it to generate an image instead.
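For developers, the same image-understanding capability is exposed through the OpenAI API. Here is a minimal sketch, assuming the openai Python package, an API key in the OPENAI_API_KEY environment variable, and a placeholder image URL:

```python
# Send an image plus a text question to a vision-capable GPT-4 model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What recipes could I make with these ingredients?"},
                # Placeholder URL standing in for a photo of the fridge contents.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```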
When you ask GPT-4 to generate images, the LLM will leverage DALL-E 3’s capabilities to do so; there’s no need to switch to another website or app. This image generation feature is only available in ChatGPT Plus.
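The equivalent call for developers goes through the API’s images endpoint; a short sketch, with a made-up prompt and the default square size:

```python
# Generate an image with DALL-E 3 via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a fridge stocked with fresh vegetables",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # temporary URL pointing to the generated image
```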
The free version of ChatGPT cannot access the internet. ChatGPT Plus, however, can, and thus can provide more current, accurate answers. OpenAI integrated a “Browse with Bing” feature into GPT-4, which means the AI chatbot can provide up-to-date information on current events.
When you ask GPT-3.5, the model behind the free version of ChatGPT, a question like “What is the most powerful iPhone?”, it will likely say the iPhone 13 Pro Max, which launched in 2021. This is because it was only trained on information leading up to January 2022, so its knowledge is limited to events before that cutoff.
If you were to ask GPT-4 the same question, it would likely respond with the iPhone 15 Pro Max, Apple’s latest high-end iPhone that debuted in September 2023.
Gemini 1.0 Ultra didn’t quite live up to Google’s hype; it lagged behind GPT-4 Turbo in a number of ways. But Claude 3 seems to be the real deal.
Like Gemini, Claude 3 comes in three versions: Opus, Sonnet, and Haiku. Opus is the most capable but also the most expensive to use. Anthropic published results showing Claude 3 Opus edging out its leading rivals, GPT-4 Turbo and Gemini 1.0 Ultra, on a variety of performance benchmarks.
For example, Claude 3 Opus achieved 86.8 percent accuracy on the Massive Multitask Language Understanding (MMLU) benchmark, compared to 86.4 percent for GPT-4 and 83.7 percent for Gemini 1.0 Ultra.
Claude did better than GPT-4 on some of my tests and worse on others. Overall, the models’ performance was too similar to pick a winner. But that in itself is notable. Until now, OpenAI has indisputably had the world’s best-performing foundation model. Now, GPT-4 finally has some serious competition.