Conf42 JavaScript 2025 - Online

- premiere 5PM GMT

Making Docs and AI Work Together: Improving Context, Accuracy, and Agentic Coding in the IDE

Abstract

AI can speed up the workflow, but without the right context, it can just as easily slow it down. This session explores how to balance AI assistance with documentation to maintain accuracy and control inside the IDE. Learn how modern APIs like Cloudinary, Auth0, and Firebase can work hand in hand with AI tools to generate cleaner code, automate tasks safely, and accelerate development without guesswork.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Let's start this talk by setting the scene. You're a JavaScript developer. Maybe you're working by yourself, maybe you're working with a team, or maybe you're even leading a team, and your goal is to use AI for real impact. Now the challenge is, do we really know what that means? How do you use AI for real impact? What does it mean to use AI effectively, and what are the measurable benefits that we're expecting to achieve? Before we actually achieve our goal, we have to have a vision of how we're gonna use AI in order to be effective. And once we understand that, the next step is to think about those hallucinations that we all suffer from, and to figure out what best practices we need to agree on to make sure that AI is accurate enough to be trustworthy. Because our teammates aren't gonna buy into it, they're not gonna wanna use AI, even if we promise them real impact, unless it actually works well.

Hi, my name is Sharon Yelenik. I'm a Senior Developer Experience and API Content Writer at Cloudinary. And tonight we're gonna talk about making docs and AI work together: improving context, accuracy, and agentic coding in the IDE. I hope you enjoy this talk and I hope you find it useful.

Over the past year, I've had what I can only describe as a transformative experience. I was on a very large team at Cloudinary that worked on turning Cloudinary APIs into Model Context Protocol (MCP) servers. That process completely reshaped how I think about documentation, accuracy, and where AI truly fits in a developer's workflow. So today I wanna share the takeaways that changed my way of thinking.

First of all, if we set up our environment correctly, we don't have to leave the IDE. We can code, we can search docs, and we can apply other tools from our tech stack directly within the IDE, without having to change context to do all of those tasks that we used to have to juggle.
I also learned that the big problem holding AI back is not only accuracy, but the context that we provide the AI; we'll talk about that a bit more later. And finally, I discovered the magic that happens when we connect documentation directly to AI clients in the IDE.

So, on to our first question of the night: what does using AI effectively actually mean? Our talk is about using AI within the IDE, so from now on we're gonna call that using the LLM client. The LLM client is the AI inside the IDE that reads documentation, reasons, and performs actions. And we're gonna talk about what I think are the two central use cases that you can perform within your IDE.

The first is code generation and refactoring. This is when you're working in your IDE with your AI panel open, and you ask the LLM client to generate code that does X, Y, and Z for the app you're working on.

The second use case is performing agentic tasks within your IDE. This is the AI executing actions like uploading assets or configuring environments: tasks that are more like backend work or configuration that you wouldn't do within your program, and would normally have to leave your IDE to do somewhere else. So instead of leaving and doing this elsewhere, we're gonna use MCP servers to handle authentication, validation, and execution so that the AI can act safely for you.

When we created the MCP servers for Cloudinary, our goal was simple: to enable IDEs like Cursor, Claude Code, and VS Code Copilot not just to suggest code, but to also use Cloudinary directly, managing images and videos, adding metadata, configuring environments, and automating workflows.

So now that we understand how to use AI effectively within the IDE, let's talk about some of the measurable benefits we can expect. First of all, I'm not gonna go through all of the statistics here.
You can have a look at them yourself, but lots and lots of developers are using third-party APIs to perform tasks within their apps, and the statistics on that are high. From these statistics we see all of these APIs being used heavily, from Next.js to Auth0 to Stripe to Cloudinary and much, much more. APIs are the connective tissue that make modern JavaScript apps work, and when we use the LLM client with these APIs, we absolutely streamline their use.

In combination with that, let's look a little bit at what is happening in AI development. Developers are saying that it's speeding up their process. They're saving time; they can do the work that they usually do in a day in a few hours. And this is not only for code generation; like we described before, this includes the agentic tools that let you perform actions you would normally have to leave the IDE to perform.

Now, of course, here comes the caveat: we've got poor stats on how much developers distrust AI tools. Only a small share of developers, just 3.8%, report experiencing low hallucination and high confidence. So this is really not so happy yet. We need to find ways to boost confidence and boost accuracy.

Also, AI agents are still a minority. People are not catching on yet to using MCP servers. You can look at the stats about how many developers are using these agents already, and you can see that maybe that has to do with some of their worries about accuracy and about data security. This shows that we're still not using AI agents and still not utilizing AI to its full benefit.

Finally, the data is clear that missing context is the problem. Hallucinations are not the root problem, just a symptom; the core problem is that we're missing context. The AI doesn't know the details of my project. It doesn't know the details of my architecture or my naming conventions.
It doesn't know the product I'm using or the API's documented rules, so it just guesses, and every time it guesses it risks hallucinating, and that erodes the trust. We can look at this negatively, or we can say that this is an opportunity. We have an opportunity, because we know that once we get our engineers to high confidence, all the stats about how AI makes jobs more enjoyable go up. Confidence creates adoption. And the lesson is: the better the context, the better the LLM client performs, and the easier it is to get buy-in from the team.

So how are we gonna improve context within our LLM client? The solution is documentation. In the past, documentation gave developers what they needed: setup instructions, code snippets, and examples. Now our LLM clients are gonna consume them right inside the IDE. The LLM client needs documentation as its source of truth, just the way we do: reading it, interpreting it, and using it to figure out what to do next. Without it, the LLM is working a bit blindly.

So what best practices should we agree on to make sure that AI is accurate enough to be trustworthy? The best practices we're gonna talk about apply to all of these APIs: Auth0, Firebase, Supabase, Slack, Shopify, Cloudinary, and many others. We're going to use Cloudinary for the purposes of demonstration, and we're gonna revisit the other APIs at the end just to review what you could do with them.

So we're gonna start with the basics of just adding the Cloudinary MCP servers into my IDE. I'm gonna show this to you in Cursor, but of course you can go to any of the other IDEs, look up their documentation, and see how to add MCP servers there as well. I'm gonna copy and paste three of the remote MCP servers that Cloudinary offers. I'm going to copy this code and jump into the IDE, into Cursor, and open Cursor settings.
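As a rough sketch of what gets pasted into Cursor, an mcp.json file maps server names to either a remote URL or a local command. The server names, URL, and package name below are placeholders, not Cloudinary's real endpoints; copy the actual entries from the provider's MCP documentation:

```json
{
  "mcpServers": {
    "example-remote-server": {
      "url": "https://mcp.example.com/mcp"
    },
    "example-local-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server-package"]
    }
  }
}
```

Remote servers (the `url` form) typically handle authentication through the browser login flow shown in the demo, while local servers (the `command` form) run as a process on your machine.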
I'm gonna open Tools and Integrations, and I'm gonna add a custom MCP server. Right within these two brackets, I'm gonna paste the code and save it. And if you look at the Cursor settings, I'm connected: I'm connecting to these three servers, but I need to authenticate. So I'm gonna click on this, which reroutes me to my application login. I'm gonna log in, choose my cloud name and my product environment, accept access, and now I'm connected. Okay, I'm gonna do the same authentication for the others, and now all my tools are enabled. Great. So once that's done, I can go ahead and do agentic tasks right within my IDE. Okay, back to the slides.

What's the next thing? Next, I'm gonna reference the API's docs site in my IDE. We're gonna do this for Cloudinary: we're gonna go to Cursor settings, then Indexing and Docs, and we're going to add the URL for the Cloudinary documentation to the docs in the IDE. And every time I make a request, I'm going to remember to add context and select those docs. So let's see how this is done within the IDE. Okay, we're already in settings, so let's just click Indexing and Docs, go to Add Doc, and type in the Cloudinary documentation URL. Let's call this Cloudinary Docs, and we're gonna confirm.

The next best practice we're gonna talk about is Context7. Context7 provides an MCP server that indexes the code snippets and examples from the API documentation. This means that your LLM client has access to all of those examples, and therefore its autocomplete and generated code are accurate, derived from the documentation and not from guesses. Now, not all products are yet indexed in Context7, so when you work with a certain API, just make sure that the product is already registered with Context7. And don't underestimate your influence: if users really want to use Context7, API providers will start considering it and using Context7 more and more. So, how to add Context7 in Cursor?
What you're gonna do is go to Cursor settings again, go to Tools and Integrations, and add the MCP server for Context7 to that mcp.json file we saw before. Since we've already had a chance to add MCP servers, let's not demonstrate adding the Context7 one; just assume that you can do that on your own by copying and pasting this code here.

Okay, so what comes after Context7? Tip number four, last but not least, is leveraging rules. Rules files are how we move from AI that guesses to AI that follows your standards. In Cursor they're .mdc files, but the same concept exists in other IDEs: for example, JetBrains uses .md, and VS Code uses JSON or YAML. The format doesn't really matter; it's the principle that does. These are short, example-rich rules that your LLM client reads before it writes, so every suggestion it makes fits your conventions and runs correctly.

Some APIs already provide rules or compatibility references that you can use right away. For example, Cloudinary includes a document that explains how to create transformations correctly, including how the order of transformation parameters affects the results and which combinations are invalid. You can also create your own rules files to help your LLM client perform tasks independently. For instance, in our docs-as-code setup, we created a rules file that teaches the LLM client how to create a new markdown page, migrate an article into it, and apply all the required formatting and configuration automatically. Of course, a human has to review all that and make sure everything's been done correctly.

So let's talk about how to do it. If you've got a rules file provided by the API, you can simply add it to the documentation, just like we did with the general Cloudinary docs. For example, we've got the Cloudinary transformation rules file, which is a markdown file, and you can just add it the same way we did before.
So I'm not gonna show it to you in the IDE again. You'll go to Cursor settings, then Indexing and Docs; in the Docs section, you'll add the doc and enter the name of the rules file. We'll see a little bit later, in an actual example, how every time we write a prompt, we're going to add context and attach that particular doc to the request we make.

Okay. Now, even more powerful is that you can create your own rules file. In Cursor, you're going to put your rules files in the .cursor/rules folder of your project; there's also the legacy project-wide .cursorrules file at the project root. It's gotta be formatted as markdown with clear headings, plus lightweight examples that the model can copy.

Okay, so when you're creating your rules file, here are a few tips and tricks. When writing rules, think in terms of the four Cs. Compatibility notes: which parameters, SDKs, or APIs play well together. Conventions: explain what your naming, folder, or architecture standards are. Constraints: what the AI must never do, for example exposing secrets or calling admin APIs from the browser. And canon: short, correct examples and code snippets that the model can reuse in other scenarios. On top of those, guardrails: give the AI a range of what it can or can't do, what it should always do and what it should never, ever do.

Here are a few Cloudinary examples. We always want f_auto (automatic format) and q_auto (automatic quality) to be applied, so that our images are always delivered optimized. We want our text overlays to always look legible, so we're gonna give it a standard and instructions for how we want that formatted. And we're going to forbid g_face from being used with c_pad, because those are incompatible. With this, I've safeguarded the code that gets generated by the LLM, to make sure I'm not going to have bugs that I'd have a really hard time finding.
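To make those example rules concrete, here's a minimal sketch that builds Cloudinary delivery URLs by hand so the conventions are visible in one place. The cloud name "demo" and public ID "sample" are placeholders; the URL structure itself follows Cloudinary's documented delivery format:

```javascript
// Placeholder cloud name; substitute your own product environment's.
const CLOUD_NAME = "demo";
const BASE = `https://res.cloudinary.com/${CLOUD_NAME}/image/upload`;

// Rule: every delivery gets f_auto,q_auto appended, so images are
// always served in an optimized format and quality.
function deliver(publicId, transformation = "") {
  const parts = transformation
    ? [transformation, "f_auto,q_auto"]
    : ["f_auto,q_auto"];
  return `${BASE}/${parts.join(",")}/${publicId}`;
}

// Rule: face cropping pairs g_face with c_auto, never c_pad.
const thumb = deliver("sample", "c_auto,g_face,w_300,h_300");
// → https://res.cloudinary.com/demo/image/upload/c_auto,g_face,w_300,h_300,f_auto,q_auto/sample
```

In a real project you'd likely use one of Cloudinary's SDKs instead of string concatenation; the point is that a rules file can state these constraints so the LLM client's generated URLs follow them.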
And finally, maintenance tips: keep rules short, one to three pages, review them often, and include links to the official documentation.

Now I'm gonna show you a rules document that I created, and we're actually gonna soon see how we apply it in a real use case. We can see that some of the things we talked about are here, for example using f_auto and q_auto on all image deliveries. Here's a new one: I want my IDE to always apply named transformations when I'm transforming images of a certain type, for example hero images or thumbnails. I don't want it to apply ad hoc transformations. I want it to either apply an existing named transformation, which is like a template, or, if none exists for the type I'm requesting, create the named transformation and then apply it, in that case and in all subsequent cases. So I've got a little bit of a description here, an explanation of what I'd like it to do when I'm transforming images of a certain type.

We'd also like g_face to always be used with c_auto, to get the effect closest to my standards of cropping: cropping around the face, with an automatic crop that keeps the main subject in the middle. And here are my text overlay standards that I want it to always keep, along with a little code example of how to achieve the requirements I set out here.

Okay, so finally, here's a little overview of other IDEs, the file extension you need to use for the rules files in those IDEs, and the location required for those rules files. You can review it on your own later if you don't have Cursor and you use one of the other IDEs.

Now we've made it to our finale. We're actually gonna go through a Cloudinary use case right within my IDE, and we're gonna see how results become more accurate when you've got all of those best practices in place.
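As a rough sketch of what a rules file like the one described above might contain (the frontmatter keys follow Cursor's .mdc project-rules format, and the named transformations `t_hero` and `t_thumbnail` are illustrative, not real):

```md
---
description: Cloudinary delivery standards for this project
alwaysApply: true
---

# Image delivery rules

- Always append `f_auto,q_auto` to every image delivery URL.
- For hero images and thumbnails, apply a named transformation
  (`t_hero`, `t_thumbnail`). If none exists for the requested type,
  create it first, then apply it there and in all subsequent cases.
- Use `g_face` only together with `c_auto`. Never combine `g_face`
  with `c_pad`; the combination is invalid.
- Text overlays: white text, legible font, flush to the bottom
  of the image.
```

Short, example-rich, and reviewed often, exactly as the maintenance tips suggest.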
So let's go through the scenario first, and then we're gonna go into the IDE and actually implement it. First we're gonna agentically upload an image, and then we're gonna use that image. Agentically uploading it is going to save me that context switching: I'm staying in the IDE and using the MCP servers instead of switching tools and going to the Cloudinary console. Once that image is uploaded, I'm gonna use it as the hero image of my website. The original image is portrait, so I'm gonna have to apply a named transformation to make it landscape orientation so that it can fit as my hero image. Next, I'm going to take that original image and apply a text overlay to it, and we're gonna see how the best practices from my rules file are implemented. And finally, we're going to transform that same image into a thumbnail, again using the transformation required in the rules file.

So let's go right into the IDE and see it in action. Here I've got my index.html file, which already has some code in it. I've got a placeholder for the hero image, and the source is not yet populated, but it's going to get populated by the code that I'm gonna put into my index.js file. Over there, I'm going to transform the original image that I'm about to upload to Cloudinary.

Right now, I'm going to ask Cloudinary to upload this image to my product environment; the public ID should be Father's Day banner. It's giving me a prompt, and I'm gonna allow it to run the upload. And there we have it: I've got my image, and it even gave me the URL of the image that just got uploaded.

Now for the hero image, I want the LLM client to, first of all, agentically create a named transformation for hero images; then to apply the named transformation in index.js to the image I just uploaded; and finally, to use the generated URL as the source for the hero image in index.html. Let's add all our docs as context first.
Click the @ symbol to add docs: the Cloudinary docs and the transformation rules file. I don't have to add the rules file that I created, because I already set it to Always Apply. Now let's add the prompt and see what happens: transform the specified image, make it a hero type with the width and height specified, apply the transformation in index.js, and use it as the source in index.html. Okay, it's checking if a transformation like that already exists. It found one, but it's not quite what I specified, so it's going to create a new transformation, plus it added the code to apply it. Amazing. So let's take a look and see how it did, and this is what our gorgeous HTML page looks like now.

Let's go back and add something: another image below that, which will contain a text overlay. So I'm gonna add the prompt again: add another transformed image under the hero image, same public ID, and add a text overlay with the text specified, again generating it in index.js and rendering it in index.html. Let's see what happens. Remember, it should take the instructions I gave it for how a text overlay should look and apply them in this case. Okay, so it added the information and the code to index.js, added it to index.html, and let's see if it worked. Let's have a look at our updated website: we've got our hero image and we've got our text overlay example, and here's the text that we added. It looks good. It's the right font, a nice clear font; it's white and it's flush to the bottom. And that's what we asked for in our rules file.

Okay, so let's do one more. This time we wanna crop a thumbnail around the face. We had a rule for using g_face only with automatic cropping, c_auto, so let's see if we can trust it to do that accurately. Okay, let's paste in our prompt. We're gonna add another transformed image under the text-overlaid image, using the same public ID.
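For readers following along without the video, the text overlay the rules file asks for can be sketched as a hand-built delivery URL using Cloudinary's `l_text` layer syntax. The cloud name, font choice, and offsets here are illustrative assumptions, not the exact values from the demo:

```javascript
// Build an overlay URL: white text, anchored to the bottom (g_south),
// with the standard f_auto,q_auto deliveries from the rules file.
// "demo" is a placeholder cloud name.
const overlayUrl = (publicId, text) =>
  `https://res.cloudinary.com/demo/image/upload/` +
  // l_text:<font>_<size>:<encoded text>, colored and positioned
  `co_white,g_south,y_30,l_text:arial_40:${encodeURIComponent(text)}` +
  `,f_auto,q_auto/${publicId}`;

const url = overlayUrl("fathers-day-banner", "Big Sale");
```

Because the rules file pins down color, gravity, and font standards, the LLM client's generated overlay code should converge on URLs of this shape instead of guessing at parameters.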
We're gonna crop the image to focus on the face, and we're going to add an upscale effect for even sharper-looking images. Okay, let's see what happens. Okay, we've got c_auto and g_face. Great. And let's see what our updated website looks like. Okay, so let's find our thumbnail, and there it is. It looks just right. It's a little bit big, but we didn't make any rules about making it smaller.

Okay, so let's get back to our slides, just to sum up. Here is a list of APIs that you can use, along with their MCP support and Context7 indexing. All of them have a few examples of how you can use them agentically. Let's just take a look at one or two of them. We've got Auth0, where you can set up login and authentication flows, manage test users, or refresh API keys securely. With Shopify, you can add sample products, draft test orders, or connect webhooks for your stores.

Here are a few ideas for rules files that you can create. You can set API fetching rules, for example how to call APIs using fetch or axios with consistent error handling and no hard-coded keys. And you could create form handling rules: preferred validation libraries, structure for controlled inputs, and submit patterns.

So hopefully, after today, you can leverage your LLM client accurately and effectively, use it for code generation and agentic tasks, and be confident that it's correct when implementing doc-friendly techniques.

What's coming up next? Docs are about to stop being reference pages and start being runtime components. They'll be adaptive, dynamic, and self-improving. We're already seeing a lot of how adaptive and dynamic they are; self-improving, not yet, but soon. The next generation of docs will live inside our IDEs, which they do already; they'll have the power of reasoning and execution, which they do already; and they'll evolve dynamically from real usage. And that, I'm really looking forward to seeing. So thank you, and feel free to email me.
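As one sketch of the "API fetching rules" idea mentioned above, a rules file could point the LLM client at a single helper with consistent error handling and no hard-coded keys. This is an illustrative pattern, not code from the talk; the environment variable name is an assumption:

```javascript
// One fetch helper the rules file can declare canonical: JSON in/out,
// errors surfaced consistently, API key read from the environment
// (never hard-coded in source).
async function fetchJson(url, options = {}) {
  const apiKey = process.env.API_KEY; // hypothetical env var name
  const headers = {
    "Content-Type": "application/json",
    ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    ...options.headers,
  };
  const res = await fetch(url, { ...options, headers });
  if (!res.ok) {
    // Consistent, greppable error shape for every failed call.
    throw new Error(`HTTP ${res.status} for ${url}`);
  }
  return res.json();
}
```

With a rule like "all API calls go through fetchJson", generated code stops reinventing error handling per call site, which is exactly the kind of convention the four Cs are meant to capture.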
And I hope that this was a useful session for you.

Sharon Yelenik

Senior API and DevX Content Writer @ Cloudinary



