Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, welcome everyone.
It's a huge pleasure to be here to talk to you.
So before we start, I would like to invite you to think about something.
I would like to invite you to reflect on: why do we test?
It can be because it brings you reliability. It can be because you want to make sure that your code is doing exactly what the project manager and the product expect it to do.
It can be because your boss told you that you have to. You might not even be testing at all.
So I would like to invite you to reflect on this during this talk, because hopefully by the end you'll have visibility into what's been happening in the past few years and where we are going.
And maybe your answer to this question might change.
I'd also like you to think about whether any of these issues has ever happened to you.
Have you ever had to deal with flaky tests, complex tests, issues with mocking, tests that are hard to maintain, testing tools that are hard to configure, slow tests, too much boilerplate, lack of TypeScript support, or lack of ESM support?
These are all things that we have struggled with historically when dealing with testing.
And this leads us to our pain points.
So when I decided to start preparing this talk, I looked at the State of JS survey, which is actually already open right now for the next year.
So please go ahead and, if you have time, fill it out.
It's very helpful for people like me who are creating content for you, but it's also pretty helpful for open source maintainers.
It's helpful for the community, and for anyone who has to make decisions in the JavaScript world.
So if you have some time, please fill out that survey.
So back to the talk.
I started thinking, okay, where is our pain?
I had all those points in the previous slide, things I've struggled with, so I looked into the State of JS survey, and it told us that mocking is the number one pain.
Then come configuration, performance, ESM and CommonJS, excessive complexity, flakiness, browser testing issues, end-to-end testing, lack of documentation, TypeScript support, unit testing, and debugging.
So it seems like, in the testing world, we're going on with a lot of pain.
But how happy are we with what we have? Where does this leave us?
It's a 2.5 out of four, which is not a "whoa, I love testing."
It's more of a "yeah, okay, let's write another test."
So with these pain points and with this data, I decided it was time that I did
the 2025 state of JavaScript testing.
So that's where this talk is going.
But before, let me just introduce myself.
My name is Daniel and I'm a developer advocate at PagerDuty.
I'm part of the SolidJS DX team, I'm an instructor at egghead.io, and I'm the author of a book called State Management with React Query.
You can find me pretty much anywhere online at danieljcafonso.
Aside from this, I maintain a newsletter called This Month in Solid, where pretty much every month or so I go over what's happened in the past month in Solid land, so that people can stay updated.
And the QR code that we have here on the right will basically take you through all of my links.
When I started preparing this talk, aside from the pain points, I was looking a bit at our companies' testing, and I had this data telling me that 62% of companies are using Jest, followed by Storybook, Vitest, Playwright, Cypress, Testing Library, Mocha, Puppeteer, Selenium, Mock Service Worker, Node Test Runner, WebdriverIO, and TestCafe, and 13% are not even testing at all, it seems.
But this gave me information about the tools.
It didn't give me information on how the tools are being used.
So I DMed a couple of friends of mine and tried to get some data out of them, and the question that I asked them was: hey, could you tell me how you are writing tests in your company?
Eight of my friends got back to me, so let's review these cases, shall we?
For case one, they're writing BDD tests with Playwright.
So we have behavior-driven development, and they're using Cucumber and Gherkin to write these tests.
They're using Jest and React Testing Library for unit and integration, and they're using Storybook as a component catalog.
For case two, they're using Jest and React Testing Library for unit and integration, and Cypress for end-to-end.
Case three: they're using Karma and Jasmine for integration slash component testing, Jest and React Testing Library for unit, Playwright for visual regression, Playwright for end-to-end, and Storybook as a component catalog.
For case four, they're using Jest and Enzyme for integration, and WebdriverIO plus Cucumber for end-to-end.
Okay.
We start seeing some patterns here, mainly around the unit and integration.
Let's see what the next cases bring.
For case five, they're using Jest and React Testing Library for unit and integration, and Playwright for end-to-end.
And this is the first person that mentioned Mock Service Worker.
For those who are not familiar with Mock Service Worker, or MSW for short, it's basically an API mocking tool that allows you to define handlers that will respond whenever a request is sent to your APIs.
It intercepts a request and returns your mock data.
For case six, they're using Jest and React Testing Library for unit and integration.
And case seven was the one that surprised me the most.
I don't know if it's because it's a more recent company, but this is the first person that actually considers static analysis as part of their testing toolkit.
If you look at the testing trophy, static analysis sits at the bottom, and this person remembered that, and I was very happy about it.
So they're using TypeScript and ESLint.
And this is also the first person that mentioned Vitest instead of Jest: they're using Vitest and React Testing Library for unit slash integration.
And for the final case: Cypress for end-to-end, and Enzyme slash React Testing Library for unit and integration.
And the way that this company is working is: the old tests are all written in Enzyme, the new tests are written in React Testing Library, and if for some reason you need to touch an old test, you migrate it to React Testing Library.
That's the way they can ensure that the migration happens successfully and without any hiccups.
And they're also using Mock Service Worker, which, once again, made me very happy.
So this is the testing trophy I was telling you about, where we have static analysis at the bottom, then we go to unit tests, integration tests, and then end-to-end.
And I think we should go over each one of these points when we're considering what's the state of 2025 testing in the JavaScript world.
So let's start with the static analysis part.
I think, to the surprise of no one, TypeScript and ESLint are winning, and I don't see them going anywhere.
Especially now with TypeScript being rewritten in Go, it's becoming faster and more performant.
It's the de facto standard in the industry, and personally, I don't want to change, and I don't see a lot of people wanting to do that.
However, I would recommend that you keep an eye out for tools like Biome and Oxlint, which are actually already being used by some projects in some scenarios.
Specifically, when you're talking about Oxlint, it's said that it outperforms the rest by 15 to 20% in short to medium-sized projects.
So that seems a good thing to look into, or at least to keep on your radar.
So now let's talk about unit and integration testing.
And I think for us to talk about unit, we need to talk about integration, because they have been deeply tied in the JS land for the last 10 years or so, let's say it that way.
And I think it's good that we start with a history lesson.
10 years ago, we were writing tests using this thing called Karma and Jasmine, and these tests were what we used to call component testing.
The reason why we called it component testing was because we were running these tests in a browser environment.
But we had some issues with this.
First, these tests tended to be flaky.
They were slow.
We had issues running them in CI/CD, so using them in a pipeline was painful and unreliable.
So we needed a transition. We needed something better.
I don't know if we transitioned to something better, but we transitioned to something that worked, because then we moved out of component testing and into Jest, which is a jsdom-based testing tool.
Basically, instead of running our tests in a browser, we are now running them in Node.
What jsdom does is polyfill some browser APIs, or most browser APIs, so that your tests can run in a Node-based environment without you having to worry about what we had before.
This meant that these tests were faster, they were not as flaky, and they could run in CI/CD, because it was all based in Node.
And this was where we were until 2018.
And in 2018, Testing Library came out, and with Testing Library came its main philosophy: "The more your tests resemble the way your software is used, the more confidence they can give you."
This meant that now we were writing our tests with a user-centric approach.
There was no longer an implementation-details focus in our tests.
We were writing our tests as if we were the user.
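To make that philosophy concrete, here is a minimal sketch of a user-centric test in the Testing Library style. The component, labels, and greeting text are made up for illustration; the point is that the test only touches what a user sees (roles, labels, text), never component internals.

```tsx
// Hypothetical login form test in the Testing Library style.
// Nothing here reaches into component state or instance methods:
// we find things the way a user would, and assert on what a user sees.
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { expect, test } from "vitest";
import { LoginForm } from "./LoginForm"; // assumed component

test("shows a greeting after logging in", async () => {
  render(<LoginForm />);

  await userEvent.type(screen.getByLabelText(/username/i), "foo");
  await userEvent.type(screen.getByLabelText(/password/i), "bar");
  await userEvent.click(screen.getByRole("button", { name: /log in/i }));

  expect(await screen.findByText(/welcome, foo/i)).toBeInTheDocument();
});
```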
But this caused a bit of what I like to call the backend fallacy and the categorization pain.
What I mean by the backend fallacy: when I learned how to test, I was a backend developer, and I learned it from a backend developer's perspective, which meant: when we're testing a unit, we're making sure we're going into that unit's implementation details; when we're testing an integration, we're testing the integration between several units.
But now that implementation-details bit didn't exist anymore.
And that led us into the categorization pain, because what a unit is for some people is not what a unit is for others.
This led to different companies and different organizations having different definitions of what testing is for them.
I've worked in a couple of companies in the past six, seven years, I've met a lot of people that work in different companies in the past few years, and I can assure you that most of them have different testing definitions.
So we are in this categorization pain, which, for good or for bad, helped us in a sense, because now we migrated to using Testing Library, and we can say that's where testing started to become fun for a lot of people, and it made things even better.
So what's our next transition, then?
It's all about test runners and bundler integration.
I don't know about you, but if you ever had to configure Jest to run with Webpack, you know it's configuration hell.
It's painful. I spent hours and hours going over that Webpack file.
And, I gotta be honest, Jest is stuck in time.
What I mean by being stuck in time: TypeScript support is experimental, ESM support is experimental, and we're in 2025.
We have things like Storybook 10 that is going ESM-only already, and they're reporting a 50% decrease in package size.
And in the meantime we have tools like Jest, where ESM support is still experimental, and that's kind of mind-blowing.
But it's not just that: we needed a test runner that is aware of its environment and works together with a bundler.
In a sense, we want to build our tests the same way we are building our application, which is something we were not doing until this point.
So that's when Vitest showed up.
Vitest works together with Vite, so basically we're using the same tool that we use to bundle and run our code to run our tests.
And we even had another transition there: instead of using just pure jsdom, we were using this thing called happy-dom, which for performance reasons got rid of certain APIs, and is basically a faster version of jsdom.
But when we're talking about jsdom, that takes us to the beginning of this year, when we finally started talking about the jsdom issue.
Artem, the creator of Mock Service Worker, wrote this amazing blog post called "Why I Won't Use JSDOM".
I know that for today's attention span it might be a bit too long (it's a 10-minute read), but I made sure to make it shorter here so that everyone can grasp the fundamentals of why we have issues with jsdom.
So the first part is: jsdom runs in Node and polyfills browser APIs, so you don't have an actual browser.
It's executing browser-side code without having one.
Artem has this quote there that I really like, which says: "jsdom runs in Node.js, pretends to be a browser, but ends up being neither."
So what's the suggestion here?
What should we do?
Around halfway to the end of the blog post, Artem has this: "The closer your test environment resembles the actual environment where your code runs, the more value your tests will bring you."
And if you're familiar with Testing Library, you might recognize this quote, because it sounds quite like the same thing.
So what is Artem telling us?
He's telling us it's about time we go back to the browser, and he's recommending that we use Vitest Browser Mode.
What Vitest Browser Mode does is basically what I just told you: it allows you to run your tests in a browser environment.
You don't have to run them in Node.
You're going back to the browser, because our technology finally allows us to do this.
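As a rough sketch, enabling browser mode is mostly a config change. The options below follow the shape Vitest documents, but double-check them against the Vitest version you're on (older versions used a slightly different shape):

```typescript
// vitest.config.ts — minimal browser mode setup (sketch, verify
// against your Vitest version's docs).
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    browser: {
      enabled: true,
      provider: "playwright", // drives a real browser
      instances: [{ browser: "chromium" }],
    },
  },
});
```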
And I know you might be thinking: oh, it's gonna take a lot of refactoring to do that.
Not really.
All you have to do is basically swap out two imports, because now we migrate from Testing Library to the vitest-browser package for your framework of choice, and make sure that our tests are now asynchronous.
While I was doing this migration myself, I noticed five points that I wanted to share here.
These are from a React developer's perspective, but here's what I noticed.
First point: I got rid of everything that was queryBy and findBy, because everything now is a getBy query.
Second point: I got rid of all the waitFors from React Testing Library, because Vitest Browser Mode has automatic retries out of the box, which is super powerful.
You don't have to wrap it with anything; all you have to do is await, and it'll retry until it times out.
Point three: change Mock Service Worker to run in worker mode.
So instead of intercepting requests in a Node-based environment, we use the service worker that runs in the browser to actually intercept requests and return that mock data.
Point four: turn tests asynchronous.
And point five: get rid of unnecessary mocks, because now we have access to the proper browser APIs.
This meant I got rid of stuff like mocks on localStorage, because I can actually use the proper localStorage now.
I don't have to mock stuff that I didn't have access to before, and that made me really happy.
I love deleting code, and this migration was super, super smooth.
I know what you're thinking now about time: how slow are these tests?
It was not that significant an increase.
It was like two to three seconds.
And to be honest, I'm okay with tests that are two to three seconds slower if they're actually running in the environment where the code is running.
It gives me confidence that it's going to work.
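The automatic retrying mentioned above is, conceptually, just a poll loop: keep re-running the assertion until it passes or a deadline expires. This is not Vitest's actual implementation, just a sketch of the idea in plain TypeScript:

```typescript
// Conceptual sketch of retry-until-timeout (not Vitest's real code):
// re-run the assertion, swallowing failures until the deadline.
async function pollUntil<T>(
  assertion: () => T | Promise<T>,
  timeout = 1000,
  interval = 20,
): Promise<T> {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return await assertion(); // success: return the value
    } catch (error) {
      if (Date.now() >= deadline) throw error; // give up, surface last failure
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
}

// Example: an "element" that only shows up after 100 ms.
let button: { text: string } | null = null;
setTimeout(() => { button = { text: "Log out" }; }, 100);

const found = await pollUntil(() => {
  if (!button) throw new Error("element not found yet");
  return button;
});
console.log(found.text); // "Log out"
```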
So when I was preparing this talk and doing it for the first time, I thought: aside from this post from Artem and what I'm showing here, I wish I had something that would help me convince you to move from Testing Library to Vitest Browser Mode.
And the author of Testing Library himself wrote this post on exactly that same day, which made me very happy.
It says: "Never been so happy to see people uninstalling Testing Library. Honestly, it's incredibly validating that Testing Library has been so successful that it's been adopted and rewritten natively by Playwright and Vitest. My work here is done."
So that's our next transition, and where I think and hope people will move in the next two, three years: basically just Vitest.
Testing Library is there as a representation, because its fundamentals are still going to be important and we're still gonna use them.
But basically all we're going to need is Vitest and Vite, and I'm pretty happy with that.
So yeah, the future is in component testing.
Let's go back to the browser, and let's see where this is going.
I feel like in the web land, and especially in the JavaScript land, we're finding ways of constantly picking up old concepts, stuff that was not working in the past, and bringing them back.
And I'm happy with it, because the technology now allows us to do this.
I have an example when we're talking about frameworks themselves.
Until 2020, 2021, everyone was so happy having single-page applications.
But then someone started asking the question: our bundles seem to be a bit packed; the stuff that we're shipping to the browser seems packed.
Is there a way we can start doing server-side rendering again?
And that's what we did, because the technology now allowed us to do server-side rendering in a better way than we were doing back in the old days, before we went all in on single-page applications.
The same thing happened with server components.
The same thing is happening with component testing.
We are back, and I hope to see how this grows.
So, next point: we're talking about end-to-end testing, and I think, to the surprise of no one, Playwright has been winning the adoption.
And it's not just because it's cross-browser, cross-platform, cross-language.
It has stuff like codegen and tracing out of the box, and it has things like parallelism, which was paid in Cypress until a couple of versions ago (I'm not sure if it still is), out of the box for free.
And it's not just that: the community work that Playwright has been doing a hundred percent shows why they're winning this adoption.
And if you're picking something up, I would go with Playwright.
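For reference, a minimal Playwright end-to-end test reads like this. The URL, labels, and texts are placeholders, not from a real app:

```typescript
// Minimal Playwright test (sketch; URL and names are made up).
import { test, expect } from "@playwright/test";

test("user can log in", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.getByLabel("Username").fill("foo");
  await page.getByLabel("Password").fill("bar");
  await page.getByRole("button", { name: "Log in" }).click();
  // Web-first assertions retry automatically, much like
  // Vitest Browser Mode's queries.
  await expect(page.getByText("Welcome, foo")).toBeVisible();
});
```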
But one thing I noticed when I was doing the research for this talk is that end-to-end solutions are becoming more than just end-to-end testing, because all of them have versions of visual testing.
All of them have versions of component testing, even though in Playwright it's currently experimental.
All of them have versions of API testing, and all of them have versions of accessibility testing.
And it's not just them: tools like Storybook, which many of us considered just for our design systems, also allow you to run all of these tests.
So it's very interesting to see how these solutions are trying to cover the market for all these other types of tests, and what is going to survive on the non-end-to-end side.
Are we going to migrate even our component tests into these, and only write unit tests on the other side?
I'm very curious to see what's going to happen there, because one thing I noticed during the server components storyline is that many people didn't know how to write integration tests for them, and they basically just ended up doing end-to-end for everything.
So I'm curious, but I'm optimistic that this is going to be the new testing trophy.
This is going to be where we are shifting: static analysis on the bottom, unit tests a bit above it, then component and end-to-end testing, and, going over all of this, visual and accessibility testing.
So I'm excited to see where we're going to end up.
And this kind of covers the static analysis, unit slash integration, and end-to-end sections of testing.
Now, one of the biggest pains that people talked about was mocking, and I want to talk about Mock Service Worker a bit more, because, to be honest, it has become the industry standard, and I'm very happy for it.
If you want a bit more information about MSW, its documentation is pretty great.
If you want to know why I love Mock Service Worker, you can search on YouTube.
I have this talk called "All You Need Is a Contract", which basically gives you a 30-to-40-minute explanation of why I love MSW.
But in short: it supports REST, GraphQL, and WebSocket, and it works in the browser, in Node, and in React Native.
And its community has been working a lot.
So there are things like msw-auto-mock, which is basically a CLI tool that generates random mock data from OpenAPI definitions for MSW.
You have playwright-msw, which is a package that provides a better developer experience when you're mocking APIs using Playwright.
And Artem is working on something called cross-process interception, which basically allows developers to control network requests across different process boundaries.
This is useful when we're talking about stuff like server and client side: it allows you to mock data on the server and also to mock data on the client.
It's very exciting, and I wouldn't pick any other solution when we're talking about network mocking; it just makes it so easy.
So yeah, that's my suggestion when we're talking about mocking.
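A minimal MSW setup, as a sketch: the endpoint and payload are made up, and the handlers/worker split mirrors the browser usage that MSW's docs describe.

```typescript
// handlers.ts — intercept GET /api/user and answer with mock data.
import { http, HttpResponse } from "msw";

export const handlers = [
  http.get("/api/user", () => {
    return HttpResponse.json({ username: "foo" });
  }),
];

// browser.ts — run MSW in worker mode: a real Service Worker
// intercepts requests made by code running in the browser.
import { setupWorker } from "msw/browser";

export const worker = setupWorker(...handlers);
// In your test setup: await worker.start();
```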
Now, we're in 2025, so obviously we need to talk about AI.
And when we're talking about AI, obviously the first part we talk about is Copilot and Claude and all of our pair-programming buddies.
I have two thoughts here.
The first one is: they'll obviously speed you up, but you need to make sure that you check their sources, especially when you're playing around with things that haven't been around for a long time, like component testing or Vitest Browser Mode and all those things.
Make sure that the tests they're writing are actually giving you the things that you need.
Because this leads to the second point, which is: we shouldn't use AI to delegate our thought process and our knowledge, because then we're going to end up with a knowledge gap if we have AI generating everything for us.
Next time, when we have to refactor these things, we won't have the knowledge; we'll have just delegated everything.
So don't use AI as an excuse to not learn your stuff.
Make sure that you actually learn it, and then work together with AI to speed you up.
Next: MCP, the Model Context Protocol.
It's everywhere. Everyone is going all in on MCP.
You have stuff like the Playwright MCP server, which is pretty, pretty good.
You can just tell it: hey, open this website, navigate around it, use certain actions, and then leverage the tools that you used to do these things to generate me a Playwright test, and it'll do that for you.
It's insanely good.
And it's not just Playwright that built an MCP server: Midscene.js, a testing tool created by ByteDance, which we'll talk about in a bit, has an MCP server.
ESLint has an MCP server.
So everyone has been going in on MCP, and the hype seems to be getting bigger and bigger.
Let's see how it keeps progressing.
But for now, you have these tools within your reach to use and leverage.
So, one question that a lot of people ask me when we start talking about AI in testing is: how far are we from self-healing tests?
There was this blog post last year on the Ministry of Testing that was pretty, pretty good, and it had this example of how a company was already using some AI models to have a level of self-healing.
That was in 2024.
Now, in 2025, I believe that model might be a bit outdated, because now we have things like auto-playwright and Midscene that allow you to basically just use natural language to write your tests.
And because we're using natural language to write our tests, obviously these tests are going to be a bit slow, because you still have to go to the LLM or the VLM and do the thing that you have to do.
But if, for some reason, the logout button changes position, this test is still gonna pass.
It'll recover, because it will figure out: okay, on the previous version the logout button was here, but on this new version the logout button is here.
It's still there.
So your test will find a way to recover and still pass.
This is the closest thing that we have to self-healing.
It's not fully self-healing, because you're still depending on the LLM or the VLM to do these things, but it's close.
I'm not sure when we're gonna reach it.
I'll leave that to future Daniel, to present in a State of Testing in 2026, 2027, 2028. We'll see.
But I'm very interested to see how this grows.
And I was talking about LLMs and VLMs because there's a difference when you are using each one of these models with your testing suite.
Midscene integrates with GPT-4o, Gemini, and UI-TARS, and UI-TARS is a point where the model can make a difference.
For those who are not familiar: UI-TARS is an open-source multimodal agent built upon a vision-language model.
So it's a VLM that integrates advanced reasoning enabled by reinforcement learning.
So when using UI-TARS, you can use target-driven style prompts, like "log in with username foo and password bar", whilst using stuff like GPT, you have to be more descriptive.
UI-TARS allows you to basically say, in one prompt, what it needs to do, and what the VLM will do is take a screenshot of the page and then reason (you can see it thinking in the background) until it figures out: okay, for logging in, I need to find a username field, I need to find a password field, and then I need to find a login button.
And it will do that.
Whereas when using GPT, you have to be more descriptive, because basically what it does is: it doesn't take a screenshot of the page; it gets a representation of the DOM and passes it to the LLM, because it's all text-based.
One thing that we noted, though, is that GPT is better at generating assertions compared to a vision-language model.
But vision-language models like UI-TARS are pretty, pretty good.
They play Doom better than I do.
Yeah, that's a whole other discussion.
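As an illustration of the two prompt styles, here is roughly what a Midscene-driven flow reads like. Treat the whole thing as a sketch: the import path, agent class, and helper names are assumptions based on how Midscene's docs describe its API, and the prompts and page are made up.

```typescript
// Sketch only — verify the import path and agent API against
// Midscene's current documentation before using this.
import { PuppeteerAgent } from "@midscene/web/puppeteer";

async function loginFlow(page: unknown) {
  const agent = new PuppeteerAgent(page as never);

  // Target-driven style (works well with a VLM like UI-TARS):
  await agent.aiAction("log in with username foo and password bar");

  // Descriptive style (what a text-based LLM like GPT needs):
  // await agent.aiAction("type 'foo' into the username field");
  // await agent.aiAction("type 'bar' into the password field");
  // await agent.aiAction("click the 'Log in' button");

  await agent.aiAssert("a welcome message for foo is visible");
}
```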
So yeah, we started this talk looking at the tools that we had, and me asking a couple of friends how they were being used in their organizations.
Hopefully by now you'll have a different overview of how they're being used and why we are using them.
And I hope that your answer to the "why do we test?" prompt at the beginning might have changed a bit during this talk.
If not, let me recap everything that we went through, and hopefully by the end I'll be able to convince you.
So the first point that I would like to make is that everyone will keep doing their own thing, and everyone has different ways to test.
That's the truth.
It's like opinions: everyone has their own, and the same thing happens with tests.
Everyone will like to write tests the way that they do.
However: tools like TypeScript and ESLint are the best way to do static analysis.
Vitest has brought the shift in test runners that we were all expecting.
Component testing is back, and it's shifting us back to the browser.
MSW is the de facto solution for network mocking.
And end-to-end solutions are pushing into a testing tool suite by giving you things like API, accessibility, visual, component, and end-to-end testing.
And when we're talking about AI: it won't fix your knowledge gap, but it'll make you more productive.
And AI testing might lead to self-healing, but at the cost of speed.
And finally, we should stop considering tests second-class citizens, because they are as important as the code that you write.
Heck, they're even more important, because they are the things that tell you that the business logic that you wrote is actually doing what you expect it to do.
They're the things that, when stakeholders come up to you and say "prove to me that this is working", are the place you can point at.
They're the things that are gonna help you make sure there are not gonna be incidents, that your stuff is not gonna break in the middle of the night.
So the next time you're thinking "oh, I hate testing, I don't have time for this", think about this, because they are the ones that actually give you the confidence and the way to sleep better at night.
So yeah, that's everything from me.
Thank you so much for having me.
This was a huge pleasure to be here.
And yeah, if you want to connect with me, you can scan this QR code or find me pretty much anywhere at danieljcafonso.
This was the 2025 state of JavaScript testing.
Thank you everyone.
I'll see you around.
Bye.