What do you dislike about Descript?
After three years of repeated complaints and requests for improvement, Descript's AI engine still "goes rogue" with the same types of phrases. This is not unique to Descript; I've seen it with other products as well, and they all give a similar response: the LLM is always "learning" or "trying to learn" to improve the outcome, and there's no way to control the specific issues I've complained about or "teach" the AI engine to type exactly what it hears. But I don't buy it. Other responses have repeatedly suggested less-than-clear audio is to blame, which has *never* been the case.
What adds to my angst is that this behavior doesn't happen 100% of the time, but it does happen at least 50% of the time, and these particular issues follow patterns that cannot be overcome with macros or other editorial clean-up automation, because you do not know which instances are incorrect until you listen to the audio. Having to correct these issues, which are not "listening errors" but rogue AI decisions, greatly slows down the proofing-to-audio process.
In no particular order of importance:
"OF" WITH DATES:
Spoken: "January 1st of 2021" or "January 1 of 2021"
Sometimes transcribed: "January 1st, 2021" or "January 1, 2021" 
ORDINALS ADDED OR REMOVED:
Spoken: "January 1st, 2021"
Sometimes transcribed: "January 1, 2021"
Spoken: "January 1, 2021"
Sometimes transcribed: "January 1st, 2021"
CONTRACTIONS:
Spoken: "I haven't heard a response."
Sometimes transcribed: "I have not heard a response."
Spoken: "I have not heard a response."
Sometimes transcribed: "I haven't heard a response." 
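To illustrate why these patterns defeat macros, here is a minimal Python sketch of my own (not anything Descript provides): the best automation can do is *flag* date and contraction phrases as candidates for review, because the transcript alone cannot tell you which variant the speaker actually said. Only listening to the audio can.

```python
import re

# Phrase patterns where the AI "goes rogue": dates (with or without
# ordinals or "of") and contraction pairs. Every match is only a
# candidate -- the text alone cannot reveal which form was spoken.
MONTHS = (r"January|February|March|April|May|June|July|August|"
          r"September|October|November|December")
CANDIDATE_PATTERNS = [
    re.compile(rf"\b({MONTHS}) \d{{1,2}}(st|nd|rd|th)?(,| of) \d{{4}}"),
    re.compile(r"\b(haven't|have not|isn't|is not|don't|do not)\b",
               re.IGNORECASE),
]

def flag_for_review(transcript: str):
    """Return (position, matched text) pairs a proofer must verify by ear."""
    hits = []
    for pattern in CANDIDATE_PATTERNS:
        for m in pattern.finditer(transcript):
            hits.append((m.start(), m.group(0)))
    return sorted(hits)

text = "I haven't heard a response since January 1st, 2021."
for pos, phrase in flag_for_review(text):
    print(pos, phrase)
```

A script like this can narrow down where to listen, but it cannot correct anything, which is exactly why these errors eat up proofing time.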
I have recently discovered another AI engine that transcribed these types of phrases with 100% accuracy, using the same audio that had been run through Descript with a 50% or more error rate. So it certainly IS possible to tweak the AI engine to transcribe exactly what is spoken and not what it thinks is better. 
Another issue I have is the limited usefulness of the Transcription Glossary feature. It does not allow the use of numbers, for starters. In my work, I get a lot of the same words or phrases that include numbers, such as Rule 404(b), which Descript transcribes as 4 0 4 B or 4 0 4 b. The glossary also seems arbitrary in how it interprets and applies words and phrases, applying them some of the time and ignoring them other times when it clearly should have applied them. For example, if I know the audio will include many references to "Joann," and I add that to the glossary, I might get 15 instances of "Joann" along with several instances of "Jo Ann" and "Jo Anne."
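Unlike the date and contraction errors, the spaced-digit citations are at least mechanically repairable after the fact, because the spaced-out form is never what the speaker intended, so no audio check is needed. Here is a hedged Python sketch of the kind of post-processing clean-up I mean (my own workaround, not a Descript feature, and it assumes a three-digit-plus-letter citation like Rule 404(b)):

```python
import re

# Descript renders "Rule 404(b)" as "4 0 4 B" or "4 0 4 b". The spaced
# form is unambiguous garbage, so it is safe to auto-correct without
# listening to the audio.
SPACED_CITATION = re.compile(r"\b(\d)\s(\d)\s(\d)\s([A-Za-z])\b")

def fix_citations(text: str) -> str:
    # Rejoin the digits and wrap the trailing letter: "4 0 4 B" -> "404(b)"
    return SPACED_CITATION.sub(
        lambda m: f"{m.group(1)}{m.group(2)}{m.group(3)}({m.group(4).lower()})",
        text,
    )

print(fix_citations("See Rule 4 0 4 B for details."))
# -> "See Rule 404(b) for details."
```

This only works because the wrong form is unambiguous; it does not help with the Joann/Jo Ann/Jo Anne inconsistency, where the glossary itself ought to be doing the job.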
The big carrot that keeps me tied to Descript is its ability to remove duplicate words. It never removes all of them, but it removes enough to greatly reduce the time needed to remove them during proofing. This is not something that can be easily automated, so I continue to give more weight to this feature than I probably should. In the case of the other AI engine I also use, the carrot is the unlimited upload hours per month, which is significant (Descript's subscription model is limited to 30 hours per month). But I am continuously on the lookout for an AI engine that checks all the boxes for transcription accuracy, ease of use, overall cost, and removing repeated words. Review collected by and hosted on G2.com.