Parker, how best to approach a prompt like we do a real (and common) work example -- give a precedent to a junior, ask them to work on it and update (often with more than just formatting updates... usually with substantive and context differences from the starting point), and then review a redline. Is this a legal prompt engineering question or a technological development that is yet to come (or both?)
That's an interesting one. My first thought would be to separate the substantive tasks into smaller, more "prompt friendly" tasks. Of course, the downside to this approach is that you would lose some context when switching between tasks, and the resulting document would likely be internally inconsistent.
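To make the decomposition idea concrete, here is a minimal sketch in Python of what "smaller, prompt friendly tasks" might look like: split the precedent into sections and build one focused prompt per section, instead of one giant prompt for the whole agreement. The section headings, instructions, and prompt wording are all hypothetical examples, not a real tool's API.

```python
# Minimal sketch: split an agreement into sections and build one
# focused revision prompt per section. All names and instructions
# here are illustrative assumptions.

def split_sections(agreement: str) -> dict[str, str]:
    """Split on headings of the form '## Section Name'."""
    sections: dict[str, str] = {}
    name = None
    for line in agreement.splitlines():
        if line.startswith("## "):
            name = line[3:].strip()
            sections[name] = ""
        elif name is not None:
            sections[name] += line + "\n"
    return sections

def build_prompts(agreement: str, instructions: dict[str, str]) -> list[str]:
    """One self-contained prompt per section, each carrying its own task."""
    prompts = []
    for name, body in split_sections(agreement).items():
        task = instructions.get(name, "Update defined terms for consistency.")
        prompts.append(
            f"You are revising the '{name}' section of an agreement.\n"
            f"Task: {task}\n\nSection text:\n{body}"
        )
    return prompts

agreement = (
    "## Term\nThe term is two years.\n"
    "## Confidentiality\nEach party shall keep information confidential.\n"
)
prompts = build_prompts(agreement, {"Term": "Change the term to three years."})
print(len(prompts))  # 2
```

Each prompt is then small enough for a model to handle well, but — as noted above — no single prompt sees the whole agreement, which is exactly where internal inconsistencies (a defined term changed in one section but not another) creep in.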
Another option could be to train a chatbot to follow particular rules when drafting a given type of document. For example, there are a lot of tools that can draft NDAs based on the user's preferences in past NDAs. But this approach falls short for complex agreements, where mechanically following a set of rules would not achieve the user's goals.
One technological limitation is the size of the context window that the models allow. This is why it would be difficult to copy and paste an entire agreement into the chatbot, give it instructions, and get back a complete revised agreement--there simply is not enough room in today's models for anything more than a couple of pages long.
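A rough way to see the context-window problem is the common back-of-the-envelope heuristic that a token is about four characters of English text. The sketch below uses that heuristic with an illustrative window size (the 4,096-token figure is an assumption for the example, not any particular model's real limit):

```python
# Rough sketch: estimate whether a document fits in a model's context
# window, using the ~4-characters-per-token heuristic. The window size
# and per-page character count are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Very rough estimate: about 4 characters per token."""
    return len(text) // 4

def fits_in_context(text: str, window_tokens: int = 4096,
                    reserved_for_reply: int = 1024) -> bool:
    """Leave room in the window for the model's rewritten output."""
    return estimate_tokens(text) <= window_tokens - reserved_for_reply

page = "x" * 3000  # roughly one page of prose (~3,000 characters)
print(fits_in_context(page * 4))   # a few pages fit
print(fits_in_context(page * 30))  # a long agreement does not
```

Note that the window has to hold the precedent, the instructions, *and* the rewritten output, which is why even a modest agreement can blow past the limit.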
I will look into this more and write a post about it! Thank you for the comment!