I've worked as a web developer for over 5 years and currently work on multiple websites through my part-time job, contracting, and my own projects. A constant issue I've observed when working solo or in small teams is that it's too easy to neglect website testing, and that neglect ends up hurting in the long run.
The following has happened to me more times than I care to admit:
You build a new cool feature and push it to production. You check that the Cool Feature you just updated works. You feel good and move on to the next important task (like checking Twitter) and forget to test if you accidentally broke anything else. After a while, a coworker or customer notifies you that something broke and then you have to fix it quickly. Ouch!
Now, of course, this situation can never happen when you have perfect tests and monitoring systems set up! I have yet to see this dreamland though. If your company has a dedicated expert testing team this may be a smaller issue, but even the biggest companies go down from time to time.
Continuous testing offers a solution to this problem. However, there's always stuff that feels more important to do right now. In my experience, UI testing solutions that require you to code or record the tests yourself go like this:
At first, you set up a few tests and are happy for a while. Then after a month you develop a new feature for your site and forget, or don't have time, to record a test case for it. Later on, you change the landing page layout and break the first test's selectors... The result is a set of broken tests that cover only half of your features. That's the point where I stop trusting the tests, and I'm back to square one.
I felt like the cost of creating tests was too high and started exploring ideas on how to automate web testing. First, I created logic that logs in to websites, as that covers the most critical parts of a typical product website: that the landing page, authentication, and product side all work.

Sitebot logging in to sitebot.ai
This was slow to develop further and not applicable to all websites, so I decided to try something else. The best solution I found was to crowdsource test creation to the site's own users.
What I ended up creating was a 1 kB JS snippet that anonymously records users' clicks on a website. After enough data is gathered, a program finds common patterns in the data and generates test patterns from them. The patterns are then converted to Selenium commands that run in headless Chrome. Test validation happens by checking where clicks and other actions lead and detecting changes and errors.
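To make the pattern-mining step concrete, here's a minimal sketch. It assumes recorded sessions arrive as ordered lists of CSS selectors; the data format, thresholds, and command names are my illustrative assumptions, not Sitebot's actual internals:

```python
from collections import Counter

def mine_click_patterns(sessions, min_support=3, max_len=4):
    """Find click sequences that recur across enough user sessions.

    Each session is the ordered list of CSS selectors one user clicked.
    A pattern is kept if it appears in at least `min_support` sessions.
    """
    counts = Counter()
    for session in sessions:
        seen = set()
        for length in range(2, max_len + 1):
            for i in range(len(session) - length + 1):
                seen.add(tuple(session[i:i + length]))
        counts.update(seen)  # count each pattern at most once per session
    return [pattern for pattern, c in counts.items() if c >= min_support]

def to_selenium_commands(pattern):
    """Translate a mined click pattern into Selenium-style commands (sketch)."""
    commands = []
    for selector in pattern:
        commands.append(("click", selector))
        commands.append(("wait_for_load", None))
    return commands

sessions = [
    ["#login", "#submit", ".dashboard"],
    ["#login", "#submit", ".dashboard"],
    ["#login", "#submit", ".settings"],
]
# ("#login", "#submit") occurs in all three sessions, so it becomes a test;
# ("#login", "#submit", ".dashboard") occurs in only two, so it doesn't.
patterns = mine_click_patterns(sessions, min_support=3, max_len=3)
```

The real service would then replay each command list in headless Chrome and compare where the clicks lead against earlier runs.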
I built a landing page and a dashboard for sitebot.ai and launched it as a SaaS. You can set up scheduled monitoring and test a website on demand by calling a webhook or the API. Sitebot then alerts you by email if something goes wrong. It's still in beta, but it works!
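For example, a deploy script can kick off a test run right after a release. The endpoint path, query parameter, and auth scheme below are hypothetical placeholders (the real webhook URL would come from the Sitebot dashboard):

```python
import urllib.request

# Hypothetical endpoint -- substitute the webhook URL from your dashboard.
WEBHOOK_URL = "https://sitebot.ai/api/v1/run-tests"

def build_trigger_request(site_id, token):
    """Build (but don't send) the HTTP request that starts an on-demand test run."""
    return urllib.request.Request(
        f"{WEBHOOK_URL}?site={site_id}",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

# In a deploy script, you'd send it after pushing to production, e.g.:
#   urllib.request.urlopen(build_trigger_request("my-site", os.environ["SITEBOT_TOKEN"]))
```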
I personally run Sitebot on five different websites, and it has detected a few critical errors in the last few months. I run tests automatically in my update scripts and run scheduled tests to catch issues while I sleep. In combination with an HTTP monitoring service like Uptime Robot, I've managed to keep my websites up and running with minimal effort and stress.
I'm now focusing on marketing and expanding the service. My goal is to get more customers and reach profitability. I run on AWS and got credits through AWS Builders so I'm secured for the near future.
There are many directions to extend the service, like self-updating tests and better performance monitoring. There are also possibilities to add machine learning to the mix. (I registered the .ai domain with high hopes but decided not to market it with AI terms after getting some backlash on Reddit. I think the domain is still cool though.)