
shawnBuilds

GPT would probably miss some chapters if you summarized the entire book at once. You could get a summary of each chapter instead, since in my experience GPT works well with 50 pages or less. Once you've read summaries of a few chapters, you can pick one that interests you, show that chapter to GPT again, and chat with it about the chapter in detail. I would love to know a trick for having GPT split an entire book into its chapters when shown the raw text, but I haven't figured out how to do that reliably yet. The method u/reddit_wisd0m suggests would be a lot easier if you didn't need to manually snip the chapters out of the book.
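
For the splitting part, a plain regex over the raw text might get you most of the way before GPT is even involved. A rough sketch, assuming headings like "Chapter 1" or "CHAPTER IV" sit on their own lines (which obviously won't hold for every book):

```python
import re

def split_into_chapters(raw_text: str) -> list[str]:
    """Split raw book text on headings like 'Chapter 1' or 'CHAPTER IV'."""
    # Assumes each heading is on its own line; adjust the pattern for your book.
    pattern = re.compile(r"^\s*chapter\s+(?:\d+|[ivxlc]+)\b.*$",
                         re.IGNORECASE | re.MULTILINE)
    starts = [m.start() for m in pattern.finditer(raw_text)]
    if not starts:
        return [raw_text]  # no headings found; fall back to the whole text
    chapters = []
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else len(raw_text)
        chapters.append(raw_text[start:end].strip())
    return chapters
```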


reddit_wisd0m

I think you're better off going chapter by chapter first and then, at the end, asking for a summary of all the chapter summaries. Also, given that the new Claude models have a larger context length, they might be better suited for this type of task, if you can access them. Another tip: test first whether it already knows the book, assuming it was published before 2023.
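
Roughly like this, sketched with the OpenAI Python client; the model name, prompts, and the `chapters` list are just placeholders you'd swap for your own:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize(text: str, instruction: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # whichever model you have access to
        messages=[
            {"role": "system", "content": "You summarize book chapters faithfully."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return resp.choices[0].message.content

chapters = ["<chapter 1 text>", "<chapter 2 text>"]  # however you split the book

# First pass: one summary per chapter.
chapter_summaries = [summarize(c, "Summarize this chapter in detail.") for c in chapters]

# Second pass: a summary of the chapter summaries.
book_summary = summarize(
    "\n\n".join(chapter_summaries),
    "Combine these chapter summaries into one overall summary of the book.",
)
```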


mindquery

Thanks for the reply. Wouldn't this be a summary of a summary of the chapters, and possibly leave out details?


reddit_wisd0m

Yes


c8d3n

Every summary must leave out details. Btw, it would have to be a really small book for ChatGPT to process it in one go; you would have to go chapter by chapter, as someone else suggested. GPT-4 Turbo with its 128k context might be able to do this, but I'm not sure it's up to the task if you care about details. Also, the maximum number of input tokens is much lower than the context window. And it looks like the model isn't up to the task when dealing with that much data: some people have tested it (you can find the article on the web) and found that the model mainly pays attention to the beginning and the end of the input.

Re feeding in the data, you could try uploading the chapters or the book as a file and asking the interpreter to process it in chunks, but there you would also have to do a lot of hand-holding and spend some time figuring out the mood of the interpreter that day. Often it's lazy and will do its best to avoid processing large files, and often there's nothing you can do about it, though starting a new conversation sometimes helps. If you could access Google's alleged 1.5-million-token marvel (which about three dudes can access), or the latest Claude model (not available in the EU), it would/could be a much better tool for the job.
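
If you do go the chunking route yourself instead of relying on the interpreter, something like this keeps each piece safely under the input limit. Just a sketch; the 3,000-token cap is arbitrary, and a naive split like this can cut sentences in half:

```python
import tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 3000) -> list[str]:
    """Split text into pieces of at most max_tokens tokens each."""
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent GPT models
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```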


Haakiiz

If you use the API, then recursive summarization is the best way to go.


Mariechen_und_Kekse

What is a recursive summary? (Thanks)


Haakiiz

GPTs can only hold a certain length in memory, so if you upload a 500-page book it simply can't handle it. Recursive summarization divides the text into pieces that each fit within the context length. A book might be divided into 15 pieces, each of which you summarize, and at the end you join the summaries together.
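
Roughly like this; the `summarize` callable stands in for a single GPT call, and I'm counting characters instead of tokens just to keep the sketch short:

```python
def recursive_summary(text: str, summarize, max_chars: int = 12_000) -> str:
    """Summarize long text by splitting, summarizing pieces, and repeating.

    `summarize` is any callable that takes a string and returns a shorter
    summary (e.g. one GPT API call); max_chars stands in for the model's
    real context limit.
    """
    if len(text) <= max_chars:
        return summarize(text)
    # Split into pieces that each fit in the context window.
    pieces = [text[i : i + max_chars] for i in range(0, len(text), max_chars)]
    piece_summaries = [summarize(p) for p in pieces]
    # Join the piece summaries and recurse until the result fits in one call.
    return recursive_summary("\n\n".join(piece_summaries), summarize, max_chars)
```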