I know this app has been out for almost 2 months now, but have you guys used it at all? Can’t imagine what this technology will look like even 5 years from now.
This thing can do anything from writing software code and solving math problems to creating cover letters and making stand-up comedy jokes, all in a matter of seconds.
I’ve been playing around with it for about a week. If used correctly, it can be a great tool for kids. Unfortunately, a lot of kids will use it to cheat. You can ask it for a 500 word book report on Romeo and Juliet written at a 9th grade level and you’re done.
I can’t wait for the IPO. Google is probably scared poopless.
I had a meeting with folks from Florida, and I asked it about the economy in Jacksonville. It said something to the effect that its data is from 2019, so it can't give current commentary.
I also asked about banks managing net interest margin in a rising rate environment, and it gave what seemed like a pretty informed, thorough and succinct answer. If you have to lead or participate in a discussion on something, I think it would be useful to run it by ChatGPT to make sure there isn't something you're forgetting to think about. Of course you don't want to plagiarize, but it can help you think of things and maybe guide your contribution to the discussion.
I’ve started to use it to proof business communications such as emails and contract clauses. It’s come in useful a couple of times.
While it definitely has limitations, we're looking at how best to incorporate it (the underlying GPT-3 model, not the conversational UI) into our AI delivery services. It's not a game changer, but it could very well be a step change.
It can't source data past November 2021, if I'm not mistaken. Once it can, it will probably be a paid service in order to update it daily.
Yes, lower-level white-collar workers won't be needed nearly as much. There will still need to be a pipeline of them, but the game will change significantly.
I think we're a long way from daily updates for ChatGPT. Training it took over a month. Granted, from what I understand GPT-3 models scale and parallelize efficiently, but even assuming it can be perfectly parallelized, that's something like 32x its current compute to get a new model trained daily.
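The 32x figure above is just a back-of-envelope estimate. A minimal sketch of the arithmetic, assuming training currently takes about a month (rounded to 32 days here purely for convenience) and that wall-clock time scales inversely with added parallelism:

```python
# Back-of-envelope: extra parallelism needed to retrain daily.
# Assumptions (mine, not confirmed figures): training takes ~32 days
# today and parallelizes perfectly, so wall-clock time = days / speedup.

current_training_days = 32  # assumed ~1 month
target_days = 1             # goal: a fresh model every day

speedup_needed = current_training_days / target_days
print(f"Speedup needed under perfect scaling: {speedup_needed:.0f}x")
```

In practice parallel scaling is never perfect, so the real multiplier would be higher than this idealized estimate.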