Working across different systems, I often find that a system's performance capabilities are either poorly defined or not measured at all.
Many developers shipping features are unaware of the impact their code has on performance. By impact I mean concrete numbers for the load the system can handle, tracked alongside every build.
So the question here is:
Is performance testing your system part of the continuous delivery pipeline, or is it done as a periodic effort separate from the release process?
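For context, one lightweight way to make performance part of the pipeline is a CI gate that fails the build when a load-test run exceeds a latency budget. This is a minimal sketch, not a real tool: the sample data, the 250 ms budget, and the function names are all hypothetical.

```python
# Hypothetical CI gate: fail the build if the p95 latency of a
# load-test run exceeds a budget. Thresholds are illustrative only.
import math

def p95(latencies_ms):
    """Return the 95th-percentile latency (nearest-rank method)."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest-rank index
    return ordered[rank - 1]

def passes_latency_budget(latencies_ms, budget_ms=250):
    """True if the run stays within the latency budget."""
    return p95(latencies_ms) <= budget_ms

if __name__ == "__main__":
    samples = [120, 130, 145, 160, 900]  # ms, from one load-test run
    print("p95:", p95(samples), "ms, pass:", passes_latency_budget(samples))
    # A single 900 ms outlier in this small sample blows the budget,
    # so a CI step could exit non-zero here and block the release.
```

In a real pipeline the samples would come from a load-testing tool hitting a staging environment, and the script's exit code would gate the deploy step.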
Top comments (2)
Speaking specifically about small companies, in my experience, performance is not taken into account the way it should be...
What I see most is people building things and releasing them because "it works", which leads to a bugfix 6 months later, since the aforementioned feature is now a function called hundreds of thousands of times a day, destroying the app server and the database...
On the other hand, we have larger companies used to larger-scale development and data processing, I look forward to other people's reports on that :P
Periodic effort. We keep an eye on the monitoring systems and have set up alerts for spikes. We have a general idea of the current performance and the maximum capacity of the current configuration.
The apps are not that big, though.