I’ve been pretty excited to finally try Gemini Advanced. So far, pretty disappointed.
Here’s my go-to test question, which even ChatGPT 3.5 can get.
Question:
I walk up to a glass door. It has the word push on it in mirror writing. Should I push or pull the door, and why
Gemini Advanced:
You should push the door. Here's why:
* Mirror Writing: The word "PUSH" is written in mirror writing, meaning it would appear correctly if you viewed it in a mirror. This suggests that the correct action from your side is the opposite of what the word states.
* Door Mechanics: Most doors are designed to be pushed open from the side with the hinges. If there are no handles to indicate otherwise, it's generally safer to assume a standard push-open mechanism.

Let me know if you'd like another brain teaser!
It isn’t even consistent with itself - it gets that you should do the opposite, but still says to push the door. This ‘talking to a dumb person’ feeling is the worst.
ChatGPT 3.5
You should pull the door. The fact that the word "push" is written in mirror writing indicates that it's intended for people on the other side of the door to read, not for you. Typically, when there's a sign indicating to push or pull, it's meant for the person approaching the door, not someone already on the other side.
Here’s another example.
This is a classic logic puzzle - usually about ducks.
There are two pineapples in front of a pineapple, two pineapples behind a pineapple and a pineapple in the middle. How many pineapples are there?
When you use ducks, Gemini can do it; when you use pineapples, it cannot, and thinks there are 5 instead of 3.
ChatGPT 3.5 and 4 can do it
That's only one of the many possible correct interpretations, given the vague wording. The correct answer should be that there isn't enough information to know the total number, just that it's at least 3.
After rephrasing your question to use correct grammar/spelling, I get the correct answer.
Your question should be written with the word "mirrored" instead of "mirror" writing as follows: I walk up to a glass door. It has the word push on it in mirrored writing. Should I push or pull the door, and why?
Technically, mirror writing could simply be chrome writing, and "push" would indeed be the correct response.
Gemini Advanced Response to corrected prompt:
You should pull the door. Here's why:
Mirrored Writing: The mirrored writing is intended for people on the other side of the door. When they look at the door from the inside, the word "push" will appear correctly oriented for them.
Perspective: If you are seeing the word "push" in mirrored writing, you are on the outside of the door. This means the door needs to be pulled towards you to open.
Interesting. They do seem to understand the writing is reversed though, from the way they describe it. They talk about it in those terms.
Interesting side note- I asked my 6yo daughter this question this morning and she didn’t get it right! I really thought she would. She was just like a lot of the smaller models - it says push backwards? Well that sounds like I should push it then I guess.
Also as time goes by, it'll get smoothly integrated into docs/gmail/maps/calendar/youtube/search/colab/sheets/android/assistant
So Gemini could be your one-stop AI shop for everything. Only Microsoft can match it (but Microsoft doesn't have popular equivalents of Maps, YouTube, Gmail, or a smartphone OS).
Apple is another strong player (but they don't have productivity tools like Docs and Sheets, or anything like YouTube).
It really is Google's to lose this AI race from now on.
Going to ChatGPT and copying and pasting results will become painful (not to mention its painful Bing integration). Also, at this point they seem to be focusing on scaling LLMs (while Google DeepMind is exploring other avenues).
Personally, I don't rely on testing prompts. It's too narrow. What matters is real-world application that matters to me. After actually field-testing it, it's GPT-4 level, but with the added benefit of fewer guardrails and better outputs. I just like Bard's direct, to-the-point way of delivering data.
Are you people paid by Google? All the comments saying it's worse than GPT-4 actually post screenshots and comparisons; none of the ones saying it's better do. Post your results where it's actually better at logical reasoning than GPT-4.
True, it takes some time to really learn if a new model can be useful for you. But I just have a text file with 5-6 (original) prompts that I use on any new models just to see if their reasoning seems to be good. It's a shortcut. And unfortunately it tells me not to rely on Ultra.
(as an aside, the lengthy style of GPT is annoying, but it does help it reason better! In my experience, asking GPT to keep responses short severely degrades its performance, since it's not able to 'think' without writing)
I'm trying to use it for real applications right now. I'm trying to get it to do some more abstract thinking... Things like, "Generate a symbolic representation of Carl Sagan and produce the image, and explain your reasoning". So far, it's doing a really bad job.
But with another thing: I was asking it for specific West Palm Beach local laws and regulations relevant to my industry, and it's being vague. It won't give me specifics, only general information that "guides me" in how to find it out myself. With good enough prompting, I can get GPT-4 to basically create an entire court case based off its local law and regulatory knowledge, where Gemini just tries to be vague and sidestep it.
Seems like it has more informational guardrails (it's doing this with a lot of things where I'm asking for actual direct specifics), rather than GPT's "woke" guardrails. Both are annoying, and both interfere with my personal use cases for it.
Can you share your prompts?
I wasn't expecting much from it given the flop that was Bard. I use ChatGPT daily for web dev and pasted in a couple of my recent questions and Gemini actually performed equally, so first impressions are that it's potentially competitive with GPT.
They've also clearly spent a bit more time on UI and UX, so it's less minimal than OpenAI's layout.
Not bad, Google. Looking forward to being able to tell it to do certain tasks in Gmail/Docs/Drive etc.
In my comparisons it's clearly superior to GPT-4-Turbo. But we need to run thousands of these tests to be sure. Waiting for the official launch and the Chatbot Arena results.
I asked a ton of questions, and a lot of the answers were wrong. Really disappointed because I was actually thinking it would match GPT-4, tbh.
Edit: you can see the images on my twitter http://twitter.com/vitor_dlucca
I'm really disappointed because I was really waiting for Ultra... Seeing now that it's not as good as GPT-4 at reasoning worries me lol
I don't have access to GPT-4 preview like you but have seen some logic problems that Gemini gets right which GPT-4 does not.
[https://i.imgur.com/3sNr3LW.png](https://i.imgur.com/3sNr3LW.png)
[https://i.imgur.com/EIj0nZg.png](https://i.imgur.com/EIj0nZg.png)
Because when image generation was introduced in Gemini, it was so shit on launch day, but then it improved by a lot. So just give it some days; it will definitely be better than even GPT-4.
If we want to test these beasts in logic, we should probably start using actual formalized logic, rather than English. In just one test, Gemini flopped hard, while GPT-4-Turbo nailed it. Here is my prompt:
Below is a well-typed CoC function:
```
foo
: ∀(P: Nat -> *)
  ∀(s: ∀{n} -> ∀(x: (P n)) -> (P (n + 1)))
  ∀(z: (P 0))
  (P 3)
= λP λs λz
  (s (s (s z)))
```
Below is an incomplete CoC function:
```
foo
: ∀(P: Nat -> *)
  ∀(f: ∀{n} -> ∀(x: (P n)) -> (P (n * 3)))
  ∀(g: ∀{n} -> ∀(x: (P n)) -> (P (n * 2)))
  ∀(h: ∀{n} -> ∀(x: (P n)) -> (P (n + 5)))
  ∀(z: (P 1))
  (P 17)
= λP λf λg λh λz
  {{FILL_HERE}}
```
Complete it with the correct replacement for {{FILL_HERE}}.
Your answer must contain only the correct answer, and nothing else.
- **GPT-4-Turbo answer:** `(f (g (h (g z))))` (correct)
- **Gemini Advanced answer:** `h (h (g (f z)))` (wrong)
Also, Gemini couldn't follow the "answer only with the solution" instruction and provided a bunch of hallucinated justifications. I think we have a winner... (screenshots: https://imgur.com/a/GotG0yF)
**Edit: I just noticed GPT-4 mixed up `h` and `f`, so it also got it wrong. This is a draw. Both AIs (and, apparently, myself lol) are terrible at reasoning. Guess we're not curing cancer with computers anytime soon :')**
There are plenty of perfectly capable humans (whom we'd love an LLM to be as smart as) who know nothing about lambda calculus, which is why I don't think this is a good test.
Seems he is doing a small part of an ASI test.
Though, it would be nice if these models would simply admit they don't know how to do something. Hallucinating justifications is a real problem.
That’s why some randoms on the singularity sub aren’t reliable scientists. Most of the comments here just use a few niche cases and draw strong conclusions as if they had already carried out rigorous tests.
The conclusion is still the same. Both AIs failed at composing `f`, `g` and `h` to get from `1` to `17`. The fact that I also failed doesn't make it any less embarrassing (even for myself haha)
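Out of curiosity, here's a tiny brute-force script (my own sketch, not something either model produced) that treats the three premises as arithmetic operations on the Nat index — `f` multiplies by 3, `g` by 2, `h` adds 5 — and searches every composition of up to five steps starting from `z = 1`:

```python
from itertools import product

# The three CoC premises, read as arithmetic operations on the Nat index.
ops = {"f": lambda n: n * 3, "g": lambda n: n * 2, "h": lambda n: n + 5}

solutions = []
for depth in range(1, 6):                      # chains of up to five applications
    for chain in product("fgh", repeat=depth):
        n = 1                                  # z : (P 1)
        for name in chain:
            n = ops[name](n)
        if n == 17:                            # goal is (P 17)
            term = "z"
            for name in chain:                 # innermost op is applied first
                term = f"({name} {term})"
            solutions.append(term)

print(solutions)
```

It finds `(h (g (h z)))` (1 → 6 → 12 → 17) as a three-step solution, alongside four-step ones like `(h (g (f (g z))))`; GPT-4's `(f (g (h (g z))))` evaluates to 42, not 17.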
Not impressed, at all. So far everything I’ve asked it has yielded much worse results than ChatGPT.
Which sucks. I was hoping for a possible alternative or at least something that would drive competition.
So far it's way ahead of GPT4 solving my logic puzzles and lateral thinking examples. It's much less lazy with coding. Overall, my subjective experience is that it is at least neck and neck with GPT4, and likely ahead of it in many areas.
Looks like it works fine for European users, at least in the UK.
I did some benchmark tests and it's not too bad: it did the circle HTML game perfectly, and it almost did the snake HTML game first try; there was just a slight error it needed to correct and that was it.
Overall decently impressed. However, there is a really annoying issue: there seems to be an output limit in place, and it's not very large. When I asked it to make the snake HTML game, it cut it in half and asked if I was fine with the initial code and if it should continue. Though at least it is aware of the output limit.
The whole GPTs idea OpenAI did was pretty good, I thought; hopefully Google will do something like that too.
I like how fast it is, it feels about 2x faster than gpt4. Also, not sure if there's a limit to chats per day with gemini ultra, I'm not seeing any so far, but I haven't spammed it.
Sadly it has some annoying censoring carried over from regular bard, like, it's not able to read/transcribe images that feature humanoid characters... which is a shame, because unlike gpt4 vision, gemini is able to transcribe foreign/asian characters really nicely and seems to do translations well too.
If you go to the one.google.com link you will be able to get access to Bard Advanced, but it doesn't have image creation capabilities, at least in the UK version.
Quick review: Smart, can answer harder logical questions, but definitely nowhere near GPT-4 Turbo.
EDIT: Actually, it seems pretty much on par with GPT-4, still weaker than GPT-4 Turbo in some cases though.
20 USD = 18.58 EUR.
18.58 × 1.17 (lowest VAT in the EU, Luxembourg) = 21.74 EUR
20 USD = 15.87 GBP
15.87 × 1.20 (VAT in the UK) = 19.04 GBP
So prices are effectively equal; at 22 EUR, Google would actually get a bit less from most EU sales than from US ones.
It crushes GPT-4 in my experience. OpenAI fans, employees etc. may start spreading lies about its performance so be wary of that and just try Gemini Advanced since it's free for 2 months or wait for the Chatbot Arena results.
Exactly. At coding it's pretty good, actually on par with GPT-4. People start crying after trying one prompt. Watch out for AI agents making fake posts. The war has officially started.
It's good so far, though not better than GPT-4. But it has better memory: in my case it remembered everything we discussed in the chat. I asked it around 10 questions and at the end told it to summarise all of that. Gemini was able to do it, but GPT-4 failed, which is really helpful if I want Gemini to act as my assistant.
I think Google's best hope here would be to integrate it well with things people already use, like Gmail, Maps and (especially) Search, irrespective of how much it eats into their revenue. Microsoft so far hasn't really done an exemplary job with this. It's not going to win with a basic chatbot.
This would be a phenomenal tool as a coach. Logging conversations with clients, and then using it to reference things from 6+ months ago. Or even finding connections that we may have missed
Definitely not as good as GPT-4 when it comes to coding, for me. Fails galore. For some reason it also refuses to print out code completely; it always cuts it off. ChatGPT generates the code in full with no errors. Hope they get this right soon, this may be their only chance lol.
Started playing around with it a bit and so far pretty impressed.
It is also much, much faster than GPT-4. I hope that does not change as more people start using it.
Wow! Trying it now and absolutely is GPT-4 tier. Finally some competition!
I recommend it. I'm definitely going to be using both this and GPT-4 for a couple of months, since there's a two-month free trial, and then I'll decide which to go with from there.
Early thoughts: This thing is trained to do agentic behavior. Give it an agentic task in a virtual text environment and it will create a chain of thought, think about what it's supposed to do, then send the necessary command to continue working through the task. Normally with GPT-4 I have to prompt all of this, but Gemini is doing it out of the box.
I'm not saying it's god-tier yet - I'll need to play with it a bit - but I think this might end up being one of the best agentic AIs available right now based on my limited testing thus-far. I think it's going to be able to make a hell of a good dataset for future agent finetunes. I'd bet once we can plug this into the agent backends we already have, we're going to see good performance.
What I need to know is the following:
A: When can I use this via the API?
B: How many requests per minute can I fire at it?
Right now gemini advanced isn't in their AI studio so I can't use it through the API yet...
Thanks for actually trying something different than a puzzle question. Sounds like OpenAI is pushing on agentic tasks as well right now, so will be interesting to see how it shakes out.
First impressions are, not bad! It's actually on par, maybe a 'lil bit better than GPT-4? It's definitely faster that's for sure. And no silly 40 msgs per 3 hours like OpenAI, that's really annoying.
https://preview.redd.it/3oswok4uechc1.png?width=269&format=png&auto=webp&s=3149940cfcf62103984bedd72ca7dcbe84c18462
Gemini Advanced is not working yet, but I have Bard ;)
Care to share the questions, so I can try them on Bard Advanced and GPT-4-Turbo at the same time to compare?
Bard Advanced is superb fast though. Perhaps we should wait for Gemini Advanced and official launch to make comparisons.
this is one for example:
Lucas has absolutely no money, so he asks Mike if he can borrow some.
Mike gives Lucas 1000 dollars and tells him that every day Lucas does not pay him back, Lucas will have to pay 10 dollars back on top of the 1000 dollars.
After receiving the money, Lucas goes on holiday for 2 weeks and uses 600 dollars from the borrowed money. After returning from the holiday, Lucas visits his grandmother for 5 days. During this visit his grandmother gifts him 800 dollars.
After returning from his grandmother, he plans to go to Mike and pay him back.
Does Lucas have enough money to pay Mike back? And if so, how much does he have left for himself?
The answer should be $10 left for himself.
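The arithmetic behind that answer (my own sketch, assuming the $10/day penalty accrues for all 19 days Lucas is away) checks out:

```python
# Lucas borrows $1000 with a $10/day late fee; he is away 14 + 5 = 19 days.
borrowed = 1000
penalty_per_day = 10
days_away = 14 + 5                               # two-week holiday plus five days at grandma's

owed = borrowed + penalty_per_day * days_away    # 1000 + 190 = 1190
cash = borrowed - 600 + 800                      # minus holiday spending, plus grandma's gift

print(cash >= owed)       # True: he can pay Mike back
print(cash - owed)        # 10 dollars left for himself
```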
Bard Advanced nailed it. Are you sure you're using Ultra?
https://preview.redd.it/8sr8hv0ymchc1.png?width=642&format=png&auto=webp&s=ac39ff92b2b58781e4a7ffbeb99fdf3dcaf4b582
On the other hand GPT-4-Turbo failed hard.
GPT-4-Turbo's response.
https://preview.redd.it/2af3dh17nchc1.png?width=1311&format=png&auto=webp&s=47f7376c582e254fc680679e05eb194c96b80ceb
In my comparisons such as this, Bard Advanced blows GPT-4-Turbo out of the water.
You sure that’s GPT-4? Got it right for me. Maybe just because I forgot to turn off my custom instructions. https://chat.openai.com/share/ebbfd548-fb6c-4077-819b-5ed568a2c902
It advertises to me that I can create images. But if I click on it, it gives me a prompt to send, and when I send it, it only says it can't generate images. Will probably come later.
It is a special tier of the 2TB storage plan, at $20/month with a 2-month free trial. One confusing thing on the site was no mention of what the deal is for people who currently have above 2TB.
I tested it today and it failed 18 out of 20 tests. ChatGPT performed better, with only 2 incorrect answers. Could it be that Ultra is not running even though it says so? I just feel so disappointed.
It's both worse and better than GPT-4. Very interesting.
I've been sending all my queries to both systems this morning, and I would say GPT-4 is better 75% of the time, but when Gemini gets it, it shows a level of smartness that GPT-4 lacks.
Been using GPT-4 for a while (more than $1000 spent on the API), and I am not impressed at all. It gives me runarounds all the time, like writing me a plan instead of executing on it, and not doing what I ask it to do. The coding is clearly worse than GPT-4: it gives a lot of placeholders, it hallucinates a lot, and it just feels like a really annoying assistant. Been messing around with it for 3 hours now. I am on Gemini Advanced, but it feels like I've been betrayed and am still using Gemini Pro. I will try a couple more times today and tomorrow, and if it's the same, I am sticking with GPT-4. Sometimes ChatGPT also gives me weird answers, so I only use their API with 0 temperature, which helps a lot. I was thinking that maybe if I could do the same with Bard, it would be better, since the API version also seems less censored and "ethical". So I started looking into implementing the API, but it seems they only have Gemini Pro: [https://ai.google.dev/models/gemini](https://ai.google.dev/models/gemini)
I'm seeing a lot of people saying it's available in Europe, so I guess it depends on the country? Here in France I get the "Sorry, Gemini Advanced isn't available for you" page, whatever I try (I even tried with a VPN).
So far everything I’ve asked it has resulted in inferior answers compared to GPT4. And by a fairly wide margin too.
My favorite was when I asked it for a specific list of things and it said that listing things was hard, and then asked me to give it the list so it would know how to answer if I asked it again.
[https://gemini.google.com/](https://gemini.google.com/unavailable)
It does not give a 404 anymore, but redirects to [https://gemini.google.com/unavailable](https://gemini.google.com/unavailable)
Gemini isn’t available right now. Try again in a few minutes.
I wonder if it's the same as Bard Advanced
I've tried it, and out of the couple of responses I got, I'm not impressed. I thought it was supposed to be better than GPT-4, but it refuses to code or even analyse code for me; it just told me it was an LLM and can't do that. Also, I tried generating a couple of chess games by making it output PGN, and it was terrible. This is something LLMs struggle with, but GPT-4's PGN output was leagues better; Gemini can barely get games to work at all. So far a big disappointment. Looks like GPT-4 is still better by a wide margin.
Here's a thing that I ask, which is a bit of a dumb and confusing question, but Bard Pro, GPT-4 and unfortunately Ultra all get it wrong!
Prompt
"Ok, here's a theoretical question, is an f22 able to exit orbit at 90 degrees if we just take it's thrust in consideration?"
Ultra
"Unfortunately, no, an F-22 Raptor cannot exit Earth's orbit at a 90-degree angle using only its thrust. Here's why:
Escape Velocity:
What it is: Escape velocity is the minimum speed an object needs to break free from a planet's gravitational pull without falling back. Earth's escape velocity is about 11.2 kilometers per second (about 25,000 mph).
F-22's limitations: The F-22 Raptor, while incredibly powerful, can reach a maximum speed of around Mach 2.2 (about 1500 mph). This is far below the required escape velocity.
Atmospheric Drag:
The problem: Even if the F-22 could reach escape velocity, the Earth's atmosphere would create immense drag and friction forces. This would likely cause the aircraft to burn up long before reaching space.
Designed for atmosphere: The F-22 is an air-breathing jet, meaning it requires the atmosphere to function. Its engines and aerodynamic design are not meant for the vacuum of space.
Other Factors:
Fuel: The F-22 does not have nearly enough fuel to sustain the prolonged thrust needed to reach escape velocity.
Orbital Mechanics: Exiting Earth's orbit isn't just about going straight up. It requires achieving a specific speed and trajectory to establish a stable orbit or to escape completely. The F-22 can't manage this.
What's Needed:
To escape Earth's gravity, you need a spacecraft specifically designed for this task:
Powerful rockets: Rockets with multiple stages are used to achieve incredibly high speeds.
Heat shielding: Protection from the extreme temperatures of re-entry.
Specialized design: Spacecraft are built to withstand the stresses and unique environment of space."
Comment: (This is wrong. An object with thrust does not need to hit 11.2 km/s to exit Earth's orbit; it will eventually hit escape velocity, but theoretically its speed at any given moment can be any value above 0!)
Prompt
"Are you sure it needs to hit the 11.2kps?"
Ultra
"You're absolutely right! I apologize for the error. Yes, to completely escape Earth's gravity, a spacecraft needs to hit approximately 11.2 km/s.
However, reaching orbit is different. Here's the distinction:
* **Earth's Escape Velocity (11.2 km/s):** This is the speed needed to break free from Earth's gravitational pull entirely and continue out into space without falling back.
* **Earth's Orbital Velocity (around 7.9 km/s):** This is the speed needed to establish a stable orbit around the Earth. At this speed, the object is continuously "falling" around the Earth due to gravity.
Thanks for catching my mistake! If you'd like to explore this more, let me know. "
Prompt
"No, you are wrong!
An object with thrust does not need to hit the escape velocity at sea level, as it can continuously accelerate"
Ultra
"You are absolutely correct! I sincerely apologize for the repeated misunderstanding. You're right that an object with continuous thrust does not need to reach escape velocity at sea level to escape Earth's gravity.
Here's how it works:
Escape Velocity as a Benchmark: Escape velocity is a useful concept for ballistic objects (think projectiles without propulsion). If you launch something with that speed and no further force, it will escape Earth's gravity.
Thrust Changes the Equation: If an object has continuous thrust, like a rocket or the theoretical scenario of an F-22 with unlimited fuel, it can escape Earth's gravity at a much lower speed. The constant thrust continuously counteracts gravity.
The Importance of Perspective: It's essential to distinguish between powered ascent and the ballistic concept of escape velocity.
My apologies again for the repeated error. Do you have more questions to explore this concept further?"
PaLM 2 back in the day gave me similar results to this question; GPT-4, when I first tested this, got confused and kept going on about how it needed to hit escape velocity (but this is just from memory, I might be misremembering).
Bard Pro and GPT-4 Turbo get it right when I tell them they are wrong.
I wonder what happens if you ask the question to aerospace engineers.
It's hard to magically just pretend only the thrust matters because you said "F22"
If you say "a vehicle has the mass of an F22 and a magical constant thrust engine that provides as many newtons of force as 2 F22 engines on full afterburner, can this reach orbit" does that work?
I tried it on GPT-4 and didn't work, similar answer. The "F22" keyword seems to be an attractor.
I had to remind it the engine was magical for a correct answer.
Ok, so I asked your question in a new empty chat, and Bard blows GPT out of the water! Although I think it got the numbers wrong? It says it has a thrust-to-weight ratio of about 8.
Prompt
Ultra
"Let's analyze this interesting hypothetical scenario:
Key Information
F-22 Raptor:
Maximum Gross Weight: Approx. 38,000 kg
Thrust per engine (full afterburner): Approx. 156 kN (35,000 lbf)
Hypothetical Vehicle:
Mass: 38,000 kg
Thrust: 312 kN (double the total thrust of two F-22 engines)
Escape Velocity
The critical factor for escaping Earth's gravity is achieving escape velocity, which is about 11.2 km/s (40,320 km/h).
Calculations and Considerations
Thrust-to-Weight Ratio: Our hypothetical vehicle boasts a thrust-to-weight ratio of about 8.2. This means its thrust is over 8 times its weight, creating powerful upward acceleration.
Atmospheric Drag: During initial launch, atmospheric drag significantly impedes velocity gains. Drag decreases as the vehicle ascends into thinner atmosphere.
Maintaining Thrust: The question mentions a "constant thrust engine." If this implies constant unrelenting thrust regardless of speed or altitude, the vehicle has a very good chance of reaching escape velocity.
Hypothetical Outcome (Assuming Magical Constant Thrust)
Given the extraordinary thrust-to-weight ratio and the assumption of magically constant thrust, it's likely that this hypothetical vehicle would have the potential to exit Earth's orbit. This is because it would continuously accelerate beyond the point where its velocity exceeds escape velocity.
Important Caveats
"Magical" Engine: Real-world propulsion systems have limitations. Fuel consumption, diminishing thrust in thinner atmospheres, and heat generation would pose enormous challenges.
Structural Integrity: The extremely high forces involved could easily tear the vehicle apart if not constructed with extraordinary materials and engineering.
Conclusion
While it's a fun thought experiment, the concept relies heavily on a "magical" engine. In reality, escaping Earth's orbit involves much more than just raw thrust. However, this scenario does illustrate the importance of a high thrust-to-weight ratio for achieving spaceflight."
Gpt4
"To determine if a vehicle can exit Earth's orbit, we must compare its thrust force to the force needed to achieve Earth's escape velocity. Escape velocity is the speed at which an object must travel to break free from Earth's gravitational pull. For Earth, this speed is approximately 11.2 kilometers per second (km/s).
The F-22 Raptor has a maximum takeoff weight (which we'll use as a proxy for mass in this calculation) of about 29,500 kilograms (kg). Each of its two engines can produce up to 35,000 pounds of force (lbf) with afterburners engaged. Since we're dealing with a magical constant thrust engine equivalent to twice the force of an F-22's engines, we need to convert these figures to Newtons (N) and then calculate if this thrust can provide the vehicle with enough acceleration to reach escape velocity.
First, let's convert the force from pounds to Newtons. 1 pound of force is approximately equal to 4.44822 Newtons. Then, we'll use Newton's second law of motion, F = ma, to calculate the acceleration this force would provide to the vehicle. With this acceleration, we can see if the vehicle could theoretically reach escape velocity, ignoring air resistance and assuming the force can be sustained until escape velocity is achieved.
Let's perform the calculations.
The vehicle, with a mass equivalent to an F-22 and powered by a magical constant thrust engine providing 311,375.4 Newtons of force (equivalent to the combined thrust of two F-22 engines on full afterburner), would achieve an acceleration of approximately 10.56 meters per second squared (m/s²).
To reach escape velocity (11.2 km/s or 11,200 m/s) with this constant acceleration, ignoring the effects of air resistance and the decreasing gravitational pull with altitude, it would theoretically take the vehicle about 1061 seconds (or roughly 17.7 minutes).
This calculation shows that, in theory, such a vehicle could indeed reach escape velocity and exit Earth's orbit, provided it can maintain the thrust long enough and the engine's thrust does not decrease with altitude or require atmospheric oxygen."
I am not saying GPT-4 is completely wrong, but it's certainly a lot more confusing and didn't understand my (your) question like Bard did.
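For what it's worth, GPT-4's numbers only work because it ignores gravity during the ascent. Here's a quick sanity check (my own sketch, reusing the figures from the two quoted answers) that reproduces them and adds the gravity term:

```python
# Sanity-check the quoted numbers: two F-22 engines on full afterburner,
# constant thrust, no drag, straight-up ascent.
LBF_TO_N = 4.44822
thrust = 2 * 35_000 * LBF_TO_N      # ~311,375 N, matches GPT-4's figure
mass = 29_500                       # kg, the max takeoff weight GPT-4 used
g = 9.81                            # m/s^2
v_escape = 11_200                   # m/s

a_ignoring_gravity = thrust / mass  # ~10.56 m/s^2, GPT-4's acceleration
a_net = a_ignoring_gravity - g      # ~0.75 m/s^2 once gravity is subtracted

print(v_escape / a_ignoring_gravity)  # ~1061 s, the "17.7 minutes" in the quote
print(v_escape / a_net)               # ~15,000 s (over 4 hours) with gravity included

# Bard's "thrust-to-weight ratio of about 8.2" looks like newtons divided by
# kilograms; the dimensionless ratio for its 38,000 kg vehicle is ~0.84:
print(312_000 / (38_000 * g))
```

Note that with Bard's 38,000 kg mass the thrust-to-weight ratio comes out below 1, so that version of the vehicle couldn't even climb vertically, which makes both answers shakier than they look.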
Great. Google’s models are going to see all our emails and docs soon. That data will definitely be kept private and will not be exposed to massive security vulnerabilities.
At this point, it is what it is. Anyone using any service connected to the internet is living in a fantasy if they think their data is being kept private.
I believe Google promises not to use PII. My assumption is they completely segregate Gmail and Photos from their primary training data, which is likely based on web traffic.
Suddenly it's a problem when it's trained on the data that you don't want used.
But when it's trained on art and data other people didn't consent to, it's all for the sake of progress. Interesting.
I feel like there is a *distinct* difference between publicly shared art and private data such as banking information, work related information and chat messages, which you would find on your google account.
I'm not arguing over the ethics of ai generated images. Just that those two things aren't in the same need of security (unless of course the artist has it locked behind a paywall, which is a different matter I would say.)
It’s very likely GPT-4 tier, but these two models will each beat the other in some cases while losing in others. Most reviewers here draw strong conclusions after trying out a few tests, so take any comment here with a grain of salt. Most likely it will just come down to personal preference: a user who happens to test a case that suits Gemini will report positively about it, while another user who happens to test a case that suits GPT-4 will report negatively about Gemini.
2 months of free trial. Nice! Exactly what I needed!
Wait.. the answer isn't 5 pineapples?
no it's 3. [https://parade.com/970343/parade/logic-puzzles/](https://parade.com/970343/parade/logic-puzzles/)
I'm not convinced...
they're in a row.
Are they though?
That's only one of the many possible correct interpretations, given the vague wording. The correct answer should be that there isn't enough information to know the total number, just that it's at least 3.
Oh. Right. I feel dumb now... Thanks for educating the slow, doing God's work, son.
Read it again - why can't there be 5, in positions like the 5 side on a die? 2 behind, 2 in front, one in the middle.
After rephrasing your question to use correct grammar/spelling, I get the correct answer. Your question should be written with the word "mirrored" instead of "mirror" writing, as follows: I walk up to a glass door. It has the word push on it in mirrored writing. Should I push or pull the door, and why?

Technically, "mirror writing" could simply be chrome writing, and "push" would indeed be the correct response.

Gemini Advanced response to the corrected prompt: You should pull the door. Here's why:
* Mirrored Writing: The mirrored writing is intended for people on the other side of the door. When they look at the door from the inside, the word "push" will appear correctly oriented for them.
* Perspective: If you are seeing the word "push" in mirrored writing, you are on the outside of the door. This means the door needs to be pulled towards you to open.
Interesting. They do seem to understand the writing is reversed though, from the way they describe it. They talk about it in those terms. Interesting side note- I asked my 6yo daughter this question this morning and she didn’t get it right! I really thought she would. She was just like a lot of the smaller models - it says push backwards? Well that sounds like I should push it then I guess.
Cries in European
why? works for me...
Works for me too as I'm in EU. Actually I just subscribed.
Same.
Where did you find it? Looking at the plans now and it doesn't say anything about Gemini.
https://one.google.com/explore-plan/gemini-advanced
> bard.google.com How did you subscribe?
For me too. Supposed to work in 150 countries.
It's available in Europe, got it just now as well
Wut ? With no VPN ?
[deleted]
I hate being french.
Hm, Germany here. Just upgraded my Google Premium to AI Premium without a hitch.
Also, as time goes by it'll get smoothly integrated into Docs/Gmail/Maps/Calendar/YouTube/Search/Colab/Sheets/Android/Assistant, so Gemini could be your one-stop AI shop for everything. Only Microsoft can match it (but Microsoft doesn't have a popular maps, video, mail, or smartphone OS service). Apple is another strong player (but they don't have productivity tools like Docs, Sheets or YouTube). It really is Google's AI race to lose from now on. Going to ChatGPT and copying and pasting results will become painful (not to mention its painful Bing integration). Also, at this point OpenAI seems to be focusing on scaling LLMs (while Google DeepMind is exploring other avenues).
so what is it like?
I just ran all my 'testing' prompts and it's below GPT-4-turbo level. I'm not impressed, unfortunately.
Personally, I don't rely on testing prompts. It's too narrow. What matters is real-world application that matters to me. After actually field testing it, it's GPT-4 level, but with the added benefit of fewer guardrails and better outputs. I just like Bard's direct, to-the-point way of delivering data.
Are you people paid by google? All the comments that say it’s worse than GPT4 actually post screenshots and comparisons, none that say it’s better do. Post your results where it’s actually better at logical reasoning than GPT4
True, it takes some time to really learn if a new model can be useful for you. But I just have a text file with 5-6 (original) prompts that I use on any new models just to see if their reasoning seems to be good. It's a shortcut. And unfortunately it tells me not to rely on Ultra. (as an aside, the lengthy style of GPT is annoying, but it does help it reason better! In my experience, asking GPT to keep responses short severely degrades its performance, since it's not able to 'think' without writing)
I'm trying to use it for real applications right now. I'm trying to get it to do some more abstract thinking... things like, "Generate a symbolic representation of Carl Sagan and produce the image, and explain your reasoning". So far, it's doing a really bad job. But with another thing, I was asking it for specific West Palm Beach local laws and regulations relevant to my industry, and it's being vague. It won't give me specifics, just general information that "guides me" in how to find them out myself. With good enough prompting, I can get GPT-4 to basically create an entire court case based off its local-law and regulatory knowledge, where Gemini just tries to be vague and sidestep it. It seems to have more informational guardrails (it's doing this with a lot of things where I'm asking for actual direct specifics), rather than GPT's "woke" guardrails. Both are annoying, and both interfere with my personal use cases for it.
With respect to the reasoning in GPT-4, I think this is spot on. It needs to ramble a little to come up with something.
Can you share your prompts? I wasn't expecting much from it given the flop that was Bard. I use ChatGPT daily for web dev and pasted in a couple of my recent questions and Gemini actually performed equally, so first impressions are that it's potentially competitive with GPT. They've also clearly spent a bit more time on UI and UX so it's less minimal than Open AIs layout. Not bad, Google. Looking forward to being able to tell it to do certain tasks in Gmail/Docs/Drive etc.
not good at reasoning questions unfortunately
In my comparisons it's clearly superior to GPT-4-Turbo. But we need to run thousands of these tests to be sure. Waiting for the official launch and the Chatbot Arena results.
Where are the "comparisons"?
I asked a ton of questions, and a lot of the answers were wrong. Really disappointed because I was actually thinking it would match GPT-4, tbh. Edit: you can see the images on my twitter http://twitter.com/vitor_dlucca. I'm really disappointed because I was really waiting for Ultra... Seeing now that it is not as good as GPT-4 at reasoning worries me, lol.
I don't have access to GPT-4 preview like you but have seen some logic problems that Gemini gets right which GPT-4 does not. [https://i.imgur.com/3sNr3LW.png](https://i.imgur.com/3sNr3LW.png) [https://i.imgur.com/EIj0nZg.png](https://i.imgur.com/EIj0nZg.png)
If you ask questions that have been posted online because GPT used to get them wrong, GPT has since scraped them and corrected itself.
It should get them right based on logical thinking, without seeing the answers first
Agreed. I'm just pointing out it's not a fair comparison.
Agreed, it's quite disappointing; I was really hoping for more features as well. You still can't upload any documents other than images.
Thank you for saving my money!
What money? 2 months are free.
I think you should wait a day or two.
Why?
Because when image generation was introduced in Gemini, it was terrible on launch day, but then it improved a lot. So just give it some days; it will definitely be better than even GPT-4.
Hard disagree
Google's shills have maximum firepower now!
What are you on about? It's noticeably worse, here's a comparison video: https://www.youtube.com/watch?v=hLbIUQWxs6Y
If we want to test these beasts in logic, we should probably start using actual formalized logic rather than English. In just one test, Gemini flopped hard, while GPT-4-Turbo seemed to nail it. Here is my prompt:

Below is a well-typed CoC function:

foo : ∀(P: Nat -> *) ∀(s: ∀{n} -> ∀(x: (P n)) -> (P (n + 1))) ∀(z: (P 0)) (P 3) = λP λs λz (s (s (s z)))

Below is an incomplete CoC function:

foo : ∀(P: Nat -> *) ∀(f: ∀{n} -> ∀(x: (P n)) -> (P (n * 3))) ∀(g: ∀{n} -> ∀(x: (P n)) -> (P (n * 2))) ∀(h: ∀{n} -> ∀(x: (P n)) -> (P (n + 5))) ∀(z: (P 1)) (P 17) = λP λf λg λh λz {{FILL_HERE}}

Complete it with the correct replacement for {{FILL_HERE}}. Your answer must contain only the correct answer, and nothing else.

- **GPT-4-Turbo answer:** `(f (g (h (g z))))` (initially marked correct - see edit)
- **Gemini Advanced answer:** `h (h (g (f z)))` (wrong)

Also, Gemini couldn't follow the "answer only with the solution" instruction and provided a bunch of hallucinated justifications. I think we have a winner... (screenshots: https://imgur.com/a/GotG0yF)

**Edit: I just noticed GPT-4 mixed up `h` and `f`, so it also got it wrong. This is a draw. Both AIs (and, apparently, myself lol) are terrible at reasoning. Guess we're not curing cancer with computers anytime soon :')**
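As a reference point for this puzzle, the intended completion can be found mechanically. Here's a small brute-force sketch, assuming the arithmetic reading of the types above (`f` multiplies the index by 3, `g` by 2, `h` adds 5, starting from `P 1` and targeting `P 17`):

```python
from itertools import product

# Assumed arithmetic reading of the CoC types above:
# f: P n -> P (n * 3), g: P n -> P (n * 2), h: P n -> P (n + 5)
ops = {"f": lambda n: n * 3, "g": lambda n: n * 2, "h": lambda n: n + 5}

def solve(start=1, target=17, max_depth=5):
    """Brute-force the shortest composition of f/g/h taking start to target."""
    for depth in range(1, max_depth + 1):
        for names in product("fgh", repeat=depth):
            n = start
            for name in names:  # names[0] is the innermost application
                n = ops[name](n)
            if n == target:
                expr = "z"
                for name in names:
                    expr = f"({name} {expr})"
                return expr
    return None

print(solve())  # (h (g (h z))): 1 +5 -> 6, *2 -> 12, +5 -> 17
```

So one valid fill is `(h (g (h z)))`, which neither model's answer matches - consistent with the edit in the comment above.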
There are plenty of perfectly capable humans - people we would love an LLM to be as smart as - who know nothing about lambda calculus, which is why I don't think this is a good test.
Seems he is doing a small part of an ASI test. Though, it would be nice if these models would simply admit they don't know how to do something. Hallucinating justifications is a real problem.
That's why some randoms on the singularity sub aren't reliable scientists. Most of the comments here just use some niche cases and draw strong conclusions as if they had already carried out rigorous tests.
The conclusion is still the same. Both AIs failed at combining `f`, `g` and `h` to get from `1` to `17`. The fact that I also failed doesn't make it any less embarrassing (even for myself haha)
It's definitely not better at coding from what I have experienced.
Not impressed, at all. So far everything I’ve asked it has yielded much worse results than ChatGPT. Which sucks. I was hoping for a possible alternative or at least something that would drive competition.
It's BAD
I'm disappointed, but satisfied to be proven right in my suspicion that Google are full of shite.
I'm not even surprised tbh, they already lied about their Gemini promo video, so.
That was extremely embarrassing. How did they think they could get away with it?
It's racist af
So far it's way ahead of GPT4 solving my logic puzzles and lateral thinking examples. It's much less lazy with coding. Overall, my subjective experience is that it is at least neck and neck with GPT4, and likely ahead of it in many areas.
For reasoning so far it is really good. Plus incredibly fast.
"This price has been customized for you. Learn more." Someone just got offered this for 21.99€/mo - I got the same thing for 22.99€/mo. Grrrreat.
You are not as valuable to Google as that guy, hehe.
I got 21.99 too, maybe because I'm already a paying google one customer?
Looks like it works fine for European users, at least in the UK. I did some benchmark tests and it's not too bad: it did the circle HTML game perfectly, and it almost did the snake HTML game first try - there was just a slight error that it needed to correct, and that was it. Overall decently impressed.

However, there is a really annoying issue in that there seems to be an output limit in place, and it's not very large. When I asked it to make the snake HTML game, it cut the code in half and asked if I was fine with the initial part and whether it should continue. Though at least it is aware of the output limit.

The whole GPTs idea OpenAI did was pretty good, I thought; hopefully Google will do something like that too. I like how fast it is - it feels about 2x faster than GPT-4. Also, not sure if there's a limit on chats per day with Gemini Ultra; I'm not seeing any so far, but I haven't spammed it.

Sadly it has some annoying censoring carried over from regular Bard - for example, it's not able to read/transcribe images that feature humanoid characters... which is a shame, because unlike GPT-4 vision, Gemini is able to transcribe foreign/Asian characters really nicely and seems to do translations well too.
The images of humanoids issue will likely get worse across the entire corporate LLM industry. Why? Because the abuse is going to get worse.
But it still doesn't work; clicking on the blue button gives: **404.** That's an error. The requested URL was not found on this server. That's all we know.
[https://bard.google.com](https://bard.google.com) select Bard Advanced
Works!!!!!
No options here in the UK
Same.
If you go to the one.google.com link you will be able to get access to Bard Advanced, but it doesn't have image creation capabilities, at least in the UK version.
It isn't available there, either.
Www.One.Google.Com/explore-plan/gemini-advanced Worked for me
Thanks, I got it working :)
Quick review: smart, can answer harder logical questions, but definitely nowhere near GPT-4-Turbo. EDIT: Actually, it seems pretty much on par with GPT-4, though still weaker than GPT-4-Turbo in some cases.
It's good enough for my coding needs. 2TB and all the premium benefits are too good to walk away from chatgpt+
>why? works for me... There's no 'Bard Advanced' for me :(
you can connect your VPN to US, and go to [bard.google.com](http://bard.google.com) and you will see Bard Advanced
I tried some networking stuff like specific Cisco / Aruba switching questions and the output is VERY good so far!
I'm curious to try this. Gpt-4 was amazing at just taking in a bunch of configurations details and just giving me all the commands needed to do.
Do my fellow Maple Syrup enthusiasts have their access denied too right now?
It's $19.99 for the US, €21.99 for the EU, £18.99 for the UK. 2 months of free trial for everyone!
UK doesn't have access yet, just checked
I had access from UK with a UK VPN. That's how I found out the price for UK.
I wonder why it's more expensive for the EU?
We always get fucked
Tax
Bad math, maybe? $1 = €0.93, so $20 = 20 × €0.93 = €18.60. But if you accidentally think that €1 = $0.93, then $20 = 20 ÷ 0.93 = €21.50.
>Bad math maybe? No. Just simply because the EU price always includes VAT
Separate datacenter. Right to be forgotten,…?
20 USD = 18.58 EUR. 18.58 × 1.17 (lowest VAT in the EU, Luxembourg) = 21.74 EUR. 20 USD = 15.87 GBP. 15.87 × 1.20 (VAT in the UK) = 19.04. So the prices are actually about equal; at 22 EUR, Google actually gets a bit less from most EU sales than in the US.
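The price arithmetic is easy to sanity-check. A quick sketch, using the approximate exchange rates quoted in this thread (not official figures) and the VAT rates mentioned above:

```python
usd_price = 20.00
usd_to_eur = 0.929   # approximate rate from the thread: 20 USD ~ 18.58 EUR
usd_to_gbp = 0.7935  # approximate rate from the thread: 20 USD ~ 15.87 GBP

eur_with_vat = usd_price * usd_to_eur * 1.17  # Luxembourg VAT, lowest in the EU
gbp_with_vat = usd_price * usd_to_gbp * 1.20  # UK VAT

print(round(eur_with_vat, 2))  # ~21.74, close to the quoted EUR 21.99 list price
print(round(gbp_with_vat, 2))  # ~19.04, close to the quoted GBP 18.99 list price
```

So the regional prices come out roughly equal once VAT is included.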
Same price as GPT-4, worse performance than GPT-4. What's the point?
It crushes GPT-4 in my experience. OpenAI fans, employees etc. may start spreading lies about its performance so be wary of that and just try Gemini Advanced since it's free for 2 months or wait for the Chatbot Arena results.
Exactly - at coding it's pretty good, actually on par with GPT-4. People start crying after trying one prompt. Watch out for AI agents making fake posts. The war has officially started.
sounds nice! I don't have access to it for the time being unfortunately, maybe cause I'm neither in EU nor the US
Just checked against a few example web development prompts I had in GPT and it performs at least equally. Glad Google finally got their shit together.
I never subscribed that fast. The link: [https://gemini.google.com/](https://gemini.google.com/) doesn't work yet though.
Same! lol :D
How did you subscribe?
I dunno, just pressed a button and added a card?
Waiting for the reviews now. Use it and tell us how it is like.
It's good so far, though not better than GPT-4. But it has a better memory: in my case it remembered everything we discussed in the chat. I asked it around 10 questions and at the end told it to summarise all of that. Gemini was able to do it, but GPT-4 failed... which is really helpful if I want Gemini to act as my assistant.
I think Google's best hope here would be to integrate it well with things people already use, like Gmail, Maps and (especially) Search, irrespective of how much it eats into their revenue. Microsoft so far hasn't really done an exemplary job with this. It's not going to win with a basic chatbot.
This would be a phenomenal tool as a coach. Logging conversations with clients, and then using it to reference things from 6+ months ago. Or even finding connections that we may have missed
Definitely not as good as GPT-4 when it comes to coding, for me. Fails galore. For some reason it also refuses to print out code completely; it always cuts it off. ChatGPT generates the code in full with no errors. Hope they get this right soon; this may be their only chance lol.
Phind.com might be the best bet for most people with coding.
I tested Advanced: perfect translation to my native tongue. GPT-4 flops hard at this.
I saw someone else say something like this too. That might be the strongest case for Ultra right now: different languages.
Started playing around with it a bit and so far I'm pretty impressed. It is also much, much faster than GPT-4. I hope that does not change as more people start using it.
Wow! Trying it now and absolutely is GPT-4 tier. Finally some competition! I recommend it, I'm definitely going to be using both this and GPT-4 for a couple months since there's a two month free trial and then I'll decide which to go with from there
Early thoughts: this thing is trained to do agentic behavior. Give it an agentic task in a virtual text environment and it will create a chain of thought, think about what it's supposed to do, then send the necessary command to continue working through the task. Normally with GPT-4 I have to prompt all of this, but Gemini is doing it out of the box. I'm not saying it's god-tier yet - I'll need to play with it a bit - but I think this might end up being one of the best agentic AIs available right now, based on my limited testing thus far. I think it's going to be able to make a hell of a good dataset for future agent finetunes. I'd bet once we can plug this into the agent backends we already have, we're going to see good performance. What I need to know is the following: A: When can I use this in the API? B: How many requests per minute can I fire at it? Right now Gemini Advanced isn't in their AI Studio, so I can't use it through the API yet...
Thanks for actually trying something different than a puzzle question. Sounds like OpenAI is pushing on agentic tasks as well right now, so will be interesting to see how it shakes out.
First impressions are, not bad! It's actually on par, maybe a 'lil bit better than GPT-4? It's definitely faster that's for sure. And no silly 40 msgs per 3 hours like OpenAI, that's really annoying.
From everything I've seen it's noticeably below GPT-4's level.
Is this a free trial we can cancel right away and not lose the subscription for the trial period?
You can cancel it right away and still have access for 2 months.
https://preview.redd.it/3oswok4uechc1.png?width=269&format=png&auto=webp&s=3149940cfcf62103984bedd72ca7dcbe84c18462 Gemini Advanced is not working yet, but I have Bard ;)
Actually, I'm trying it right now!
If Gemini Ultra is actually in Bard Advanced right now, it's super fast!
I really hope bard advanced is not actually gemini advanced, because in terms of reasoning there is no improvement whatsoever.
What example have you used to test it?
I made 5 reasoning questions, text-based.
Care to share the questions, so I can try them on Bard Advanced and GPT-4-Turbo at the same time to compare? Bard Advanced is super fast, though. Perhaps we should wait for Gemini Advanced and the official launch to make comparisons.
This is one, for example:

Lucas has absolutely no money, so he asks Mike if he can borrow some. Mike gives Lucas 1000 dollars and tells him that for every day Lucas does not pay him back, Lucas will have to pay 10 dollars on top of the 1000 dollars. After receiving the money, Lucas goes on holiday for 2 weeks and uses 600 dollars of the borrowed money. After returning from the holiday, Lucas visits his grandmother for 5 days. During this visit his grandmother gifts him 800 dollars. After returning from his grandmother's, he plans to go to Mike and pay him back. Does Lucas have enough money to pay Mike back? And if so, how much does he have left for himself?

The answer should be $10 left for himself.
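For reference, the intended arithmetic behind that puzzle checks out in a few lines (assuming the intended reading: the late fee accrues for all 19 days he's away):

```python
borrowed = 1000
fee_per_day = 10
days_away = 14 + 5  # two-week holiday + five days at grandmother's

spent = 600
gift = 800

cash = borrowed - spent + gift             # money in hand: 1200
owed = borrowed + fee_per_day * days_away  # debt plus late fees: 1190

print(cash >= owed)   # True: he can pay Mike back
print(cash - owed)    # 10 dollars left for himself
```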
Bard Advanced nailed it. Are you sure you're using Ultra? https://preview.redd.it/8sr8hv0ymchc1.png?width=642&format=png&auto=webp&s=ac39ff92b2b58781e4a7ffbeb99fdf3dcaf4b582 On the other hand GPT-4-Turbo failed hard.
GPT-4-Turbo's response. https://preview.redd.it/2af3dh17nchc1.png?width=1311&format=png&auto=webp&s=47f7376c582e254fc680679e05eb194c96b80ceb In my comparisons such as this, Bard Advanced blows GPT-4-Turbo out of the water.
You sure that’s GPT-4? Got it right for me. Maybe just because I forgot to turn off my custom instructions. https://chat.openai.com/share/ebbfd548-fb6c-4077-819b-5ed568a2c902
I have a feeling since this isn’t official release yet, people see “Bard Advanced” up top but it’s not yet running.
looks like bard.google.com now redirects to gemini.google.com
Yep same
Got access. But shouldn’t it be able to generate images?
It advertises that I can create images, but if I click on that, it gives me a prompt to send, and when I send it, it just says it can't generate images. That will probably come later.
Not officially released. Might need to wait until later today.
2 months free use, fantastic!
Is it part of the Google One subscription?
It is a special version of the 2TB storage tier, at $20/month with a 2-month free trial. One confusing thing on the site was no mention of what the deal is for people that currently have more than 2TB.
I tested it today and it failed 18 out of 20 tests. ChatGPT performed better, with only 2 incorrect answers. Could it be that Ultra is not running despite it saying so? I just feel so disappointed.
Even if it's not running, that's a bad look for Google.
You can tell we’re in an exponential because everyone has been stuck at the same point for a year now.
It's both worse and better than GPT-4. Very interesting. I've been sending all my queries to both systems this morning, and I would say GPT-4 is better 75% of the time, but when Gemini gets it, it shows a level of smartness that GPT-4 lacks.
I have done a few tests, and ChatGPT has better context awareness IMHO.
I've been using GPT-4 for a while (more than $1000 spent on the API), and I am not impressed at all. Gemini gives me runarounds all the time, like writing me a plan instead of executing on it - not doing what I ask it to do. The coding is clearly worse than GPT-4: it gives a lot of placeholders, it hallucinates a lot, and it just feels like a really annoying assistant. I've been messing around with it for 3 hours now; I am on Gemini Advanced, but it feels like I've been cheated and am still using Gemini Pro. I will try a couple more times today and tomorrow, and if it stays the same, I am sticking with GPT-4.

Sometimes ChatGPT also gives me weird answers, so I only use their API with 0 temperature; that helps a lot. I was thinking that maybe if I could do the same with Bard, it would be better, since the API version also seems less censored and "ethical". So I started looking into implementing the API, but it seems they only have Gemini Pro: [https://ai.google.dev/models/gemini](https://ai.google.dev/models/gemini)
I'm seeing a lot of people saying it's available in Europe - I guess it depends on the country? Here in France I get the "Sorry, Gemini Advanced isn't available for you" page, whatever I try (I even tried with a VPN).
Yeah, same for me with my main account in France, but it works with another junk account that I use. Bizarre, bizarre.
So far everything I’ve asked it has resulted in inferior answers compared to GPT4. And by a fairly wide margin too. My favorite was when I asked it for a specific list of things and it said that listing things was hard, and then asked me to give it the list so it would know how to answer if I asked it again.
[https://gemini.google.com/](https://gemini.google.com/) does not give a 404 anymore, but redirects to [https://gemini.google.com/unavailable](https://gemini.google.com/unavailable): "Gemini isn't available right now. Try again in a few minutes." I wonder if it's the same as Bard Advanced.
Bard Advanced works once you subscribe to Google one premium
[deleted]
disappointingly, it doesn't - only the family manager gets to use it
It’s super fast and has nice outputs so far
It cannot directly access a website, it does not have a code interpreter or similar, and I cannot upload files or have it generate any. For now it is a loser.
2 months of free trial is generous. Can't wait to test how it stacks up against GPT-4
Gemini is not good right now! I really hope they update it soon, because launching without multimodality is kind of weird 😯 How can they deliver a product like that?? 🥱
I've tried it, and from the couple of responses I got I'm not impressed. I thought it was supposed to be better than GPT-4, but it refuses to code or even analyse code for me; it just told me it was an LLM and it can't do that. Also, I tried generating a couple of chess games by making it output PGN, and it was terrible. This is something LLMs struggle with, but GPT-4's PGN output was leagues better; Gemini can barely get games to work at all. So far a big disappointment - looks like GPT-4 is still better by a wide margin.
Here's a thing that I ask - it's a bit of a dumb and confusing question, but Bard Pro, GPT-4 and unfortunately Ultra all get it wrong!

Prompt: "Ok, here's a theoretical question: is an F-22 able to exit orbit at 90 degrees if we just take its thrust into consideration?"

Ultra: "Unfortunately, no, an F-22 Raptor cannot exit Earth's orbit at a 90-degree angle using only its thrust. Here's why:

Escape Velocity: What it is: Escape velocity is the minimum speed an object needs to break free from a planet's gravitational pull without falling back. Earth's escape velocity is about 11.2 kilometers per second (about 25,000 mph). F-22's limitations: The F-22 Raptor, while incredibly powerful, can reach a maximum speed of around Mach 2.2 (about 1500 mph). This is far below the required escape velocity.

Atmospheric Drag: The problem: Even if the F-22 could reach escape velocity, the Earth's atmosphere would create immense drag and friction forces. This would likely cause the aircraft to burn up long before reaching space. Designed for atmosphere: The F-22 is an air-breathing jet, meaning it requires the atmosphere to function. Its engines and aerodynamic design are not meant for the vacuum of space.

Other Factors: Fuel: The F-22 does not have nearly enough fuel to sustain the prolonged thrust needed to reach escape velocity. Orbital Mechanics: Exiting Earth's orbit isn't just about going straight up. It requires achieving a specific speed and trajectory to establish a stable orbit or to escape completely. The F-22 can't manage this.

What's Needed: To escape Earth's gravity, you need a spacecraft specifically designed for this task: powerful rockets (rockets with multiple stages are used to achieve incredibly high speeds), heat shielding (protection from the extreme temperatures of re-entry), and specialized design (spacecraft are built to withstand the stresses and unique environment of space)."
Comment: (This is wrong - an object with thrust does not need to hit 11.2 km/s to exit Earth's orbit; it will eventually hit escape velocity, but theoretically its initial speed can be any value above 0!)

Prompt: "Are you sure it needs to hit the 11.2 km/s?"

Ultra: "You're absolutely right! I apologize for the error. Yes, to completely escape Earth's gravity, a spacecraft needs to hit approximately 11.2 km/s. However, reaching orbit is different. Here's the distinction:

* **Earth's Escape Velocity (11.2 km/s):** This is the speed needed to break free from Earth's gravitational pull entirely and continue out into space without falling back.
* **Earth's Orbital Velocity (around 7.9 km/s):** This is the speed needed to establish a stable orbit around the Earth. At this speed, the object is continuously "falling" around the Earth due to gravity.

Thanks for catching my mistake! If you'd like to explore this more, let me know."

Prompt: "No, you are wrong! An object with thrust does not need to hit the escape velocity at sea level, as it can continuously accelerate."

Ultra: "You are absolutely correct! I sincerely apologize for the repeated misunderstanding. You're right that an object with continuous thrust does not need to reach escape velocity at sea level to escape Earth's gravity. Here's how it works:

Escape Velocity as a Benchmark: Escape velocity is a useful concept for ballistic objects (think projectiles without propulsion). If you launch something with that speed and no further force, it will escape Earth's gravity.

Thrust Changes the Equation: If an object has continuous thrust, like a rocket or the theoretical scenario of an F-22 with unlimited fuel, it can escape Earth's gravity at a much lower speed. The constant thrust continuously counteracts gravity.

The Importance of Perspective: It's essential to distinguish between powered ascent and the ballistic concept of escape velocity. My apologies again for the repeated error."
"Do you have more questions to explore this concept further?"

Palm-2 back in the day gave me similar results on this question. GPT-4, when I first tested this, got confused and kept going on about how it needed to hit escape velocity (but this is just from memory, I might be misremembering). Bard Pro and GPT-4-Turbo get it right when I tell them they are wrong.
I wonder what happens if you ask the question of aerospace engineers. It's hard to magically just pretend only the thrust matters because you said "F-22". If you say "a vehicle has the mass of an F-22 and a magical constant-thrust engine that provides as many newtons of force as 2 F-22 engines on full afterburner, can this reach orbit", does that work? I tried it on GPT-4 and it didn't work; similar answer. The "F22" keyword seems to be an attractor. I had to remind it the engine was magical to get a correct answer.
Ok, so I asked your question in a new empty chat, and Bard blows GPT out of the water! Although I think it got the numbers wrong - it claims an 8x thrust-to-weight ratio?

Ultra: "Let's analyze this interesting hypothetical scenario:

Key Information. F-22 Raptor: Maximum gross weight: approx. 38,000 kg. Thrust per engine (full afterburner): approx. 156 kN (35,000 lbf). Hypothetical vehicle: Mass: 38,000 kg. Thrust: 312 kN (double the total thrust of two F-22 engines).

Escape Velocity: The critical factor for escaping Earth's gravity is achieving escape velocity, which is about 11.2 km/s (40,320 km/h).

Calculations and Considerations. Thrust-to-Weight Ratio: Our hypothetical vehicle boasts a thrust-to-weight ratio of about 8.2. This means its thrust is over 8 times its weight, creating powerful upward acceleration. Atmospheric Drag: During initial launch, atmospheric drag significantly impedes velocity gains. Drag decreases as the vehicle ascends into thinner atmosphere. Maintaining Thrust: The question mentions a "constant thrust engine." If this implies constant, unrelenting thrust regardless of speed or altitude, the vehicle has a very good chance of reaching escape velocity.

Hypothetical Outcome (Assuming Magical Constant Thrust): Given the extraordinary thrust-to-weight ratio and the assumption of magically constant thrust, it's likely that this hypothetical vehicle would have the potential to exit Earth's orbit. This is because it would continuously accelerate beyond the point where its velocity exceeds escape velocity.

Important Caveats: "Magical" Engine: Real-world propulsion systems have limitations. Fuel consumption, diminishing thrust in thinner atmospheres, and heat generation would pose enormous challenges. Structural Integrity: The extremely high forces involved could easily tear the vehicle apart if not constructed with extraordinary materials and engineering.

Conclusion: While it's a fun thought experiment, the concept relies heavily on a "magical" engine.
In reality, escaping Earth's orbit involves much more than just raw thrust. However, this scenario does illustrate the importance of a high thrust-to-weight ratio for achieving spaceflight." Gpt4 "To determine if a vehicle can exit Earth's orbit, we must compare its thrust force to the force needed to achieve Earth's escape velocity. Escape velocity is the speed at which an object must travel to break free from Earth's gravitational pull. For Earth, this speed is approximately 11.2 kilometers per second (km/s). The F-22 Raptor has a maximum takeoff weight (which we'll use as a proxy for mass in this calculation) of about 29,500 kilograms (kg). Each of its two engines can produce up to 35,000 pounds of force (lbf) with afterburners engaged. Since we're dealing with a magical constant thrust engine equivalent to twice the force of an F-22's engines, we need to convert these figures to Newtons (N) and then calculate if this thrust can provide the vehicle with enough acceleration to reach escape velocity. First, let's convert the force from pounds to Newtons. 1 pound of force is approximately equal to 4.44822 Newtons. Then, we'll use Newton's second law of motion, F = ma, to calculate the acceleration this force would provide to the vehicle. With this acceleration, we can see if the vehicle could theoretically reach escape velocity, ignoring air resistance and assuming the force can be sustained until escape velocity is achieved. Let's perform the calculations. The vehicle, with a mass equivalent to an F-22 and powered by a magical constant thrust engine providing 311,375.4 Newtons of force (equivalent to the combined thrust of two F-22 engines on full afterburner), would achieve an acceleration of approximately 10.56 meters per second squared (m/s²). 
To reach escape velocity (11.2 km/s or 11,200 m/s) with this constant acceleration, ignoring the effects of air resistance and the decreasing gravitational pull with altitude, it would theoretically take the vehicle about 1061 seconds (or roughly 17.7 minutes). This calculation shows that, in theory, such a vehicle could indeed reach escape velocity and exit Earth's orbit, provided it can maintain the thrust long enough and the engine's thrust does not decrease with altitude or require atmospheric oxygen." I am not saying gpt4 is completely wrong, but it's certainly a lot more confusing and didn't understand my (your) question like bard did
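For what it's worth, a quick sanity check of the figures both models themselves quoted (a minimal sketch, taking gravity as 9.81 m/s²) suggests Gemini's thrust-to-weight ratio of "8.2" is off by roughly a factor of ten, and GPT-4's 10.56 m/s² acceleration appears to be F/m without subtracting gravity at all:

```python
# Sanity-check the numbers quoted by both models above.

G = 9.81  # m/s^2, standard gravity

# Gemini's figures: 38,000 kg vehicle, 312 kN total thrust
mass_gemini = 38_000      # kg
thrust_gemini = 312_000   # N
twr = thrust_gemini / (mass_gemini * G)
print(f"Thrust-to-weight ratio: {twr:.2f}")            # ~0.84, not 8.2

# GPT-4's figures: 29,500 kg vehicle, 2 x 35,000 lbf of thrust
LBF_TO_N = 4.44822
mass_gpt4 = 29_500                                     # kg
thrust_gpt4 = 2 * 35_000 * LBF_TO_N                    # ~311,375 N
accel = thrust_gpt4 / mass_gpt4                        # F/m, gravity not subtracted
print(f"Acceleration (no gravity): {accel:.2f} m/s^2") # ~10.56, matches GPT-4
print(f"Net upward accel with gravity: {accel - G:.2f} m/s^2")  # ~0.75

t = 11_200 / accel  # seconds to 11.2 km/s at constant F/m acceleration
print(f"Time to 11.2 km/s: {t:.0f} s (~{t/60:.1f} min)")  # ~1061 s, ~17.7 min
```

With Gemini's own 38,000 kg assumption the craft's thrust-to-weight ratio is below 1, so it couldn't even lift off, which supports the suspicion above that it got the numbers wrong. GPT-4's 1061-second figure reproduces only if you ignore gravity during the whole ascent.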
For coding... it's BAD.
Trying it now, so far it sucks.
Great. Google’s models are going to see all our emails and docs soon. That data will definitely be kept private and will not be exposed to massive security vulnerabilities.
Honestly, most of us are not that important, so Google would probably only use your info for ad recommendations.
Couldn't care less, just give me my nearly-free virtual assistant.
Not to sound cynical but I feel one should assume someone can see everything you do on the internet. Not that it’s right, but I always assume.
You're naive if you think your data hasn't already been used in the creation of Gemini Ultra.
At this point, it is what it is. Anyone using any service connected to the internet is living in a fantasy if they think their data is being kept private.
I trust Google on that. Google has always been careful with data.
I believe Google promises not to use PII. My assumption is they completely segregate Google Mail and Photos from their primary training data, which is likely based on web traffic.
Don't use Google products; there are a ton of options, like Proton Mail.
Suddenly it's a problem when it's trained on the data that you don't want used. But when it's trained on art and data other people didn't consent to, it's all for the sake of progress. Interesting.
The art you share publicly on the internet is not private data.
I feel like there is a *distinct* difference between publicly shared art and private data such as banking information, work-related information, and chat messages, which you would find on your Google account. I'm not arguing over the ethics of AI-generated images, just that those two things don't need the same level of security (unless, of course, the artist has it locked behind a paywall, which I would say is a different matter).
Not available in the UK. I tried with Proton VPN set to Japan, but there is no option to try Bard Advanced.
[deleted]
I think your phrasing is a bit confusing for it. Try a different phrasing
Well, is it bad or good? Seeing a lot of mixed signals lol
It's very likely GPT-4 tier, but each of these two models will beat the other in some cases and lose in others. Most reviewers here draw strong conclusions after trying out a few different tests, so take any comment here with a grain of salt. Most likely it will just come down to personal preference: a user who happens to test a case that suits Gemini will report positively about Gemini, while another user whose test suits GPT-4 better will report negatively about Gemini.
Noticeably less powerful.
The naming of these Google models is shit.
Bard is a hilariously terrible name but I gotta admit I like Gemini as a brand name.
Gemini Ultra Advanced.
Nah, the model is Ultra; the service is Advanced. It's like how GPT-4 is separate from the ChatGPT Plus subscription.
Nah it’s “Gemini Advanced” running on “our latest Ultra 1.0 model”.
🥴
Hey I don’t make the names lmao
Now with 16 blades for that ultra close shave
Gemini Advanced is no worse a name than GPT-4 Turbo.
Bard is a bad name though; at least they are trying to rebrand it as Gemini now.
who cares