Developing, releasing, and productionising software has many best practices, and much tooling exists to make protecting higher environments easier. Whilst many advances have been made, we still have some major concerns with the DevOps process.
- Shared project databases still cause conflicts with source control.
- Playground artefacts end up making it into production.
- External data and artefacts are not code, yet they are vital to the successful functioning of any software.
- Specific deployment order of artefacts may not be controllable by tooling.
- Too little effort goes into integration testing; the process is thus developer-focused.
- It can prove too risky to attempt to remove code and artefacts in case a regression is introduced.
Whilst we can consider the above to be known as "legacy issues", often this is a simple result of the Software Development Lifecycle (SDLC).
Indeed, there is a simpler explanation for why we end up with legacy issues: write code first, worry about it later. No software company ever built its Continuous Delivery process before it had a fairly large working code base. The fast pace of changing business needs, set against the slower, resource-intensive nature of software development, means developers are always releasing code and then having to work with other teams to automate the release process.
In the containerised and virtualised world, and with the move to the cloud, many of these challenges appear to be solved; in our view, they are simply relocated.
A specific challenge we face at Info Rhino is one that not all enterprises face but is very common in our experience.
Client-specific requirements using generic software
Almost all software written by Info Rhino is reusable and generic. These solutions are customisable per client; their flexibility is what keeps our costs down and our clients happy. Clients will have completely different information within their systems. Clients may want different plugins. This complexity can be reduced by good practice and intelligent code and configuration management, but not eliminated.
We have two main applications, both highly data-centric yet serving completely different purposes: our Data Engines and our Web Data Platform (WDP). Even two instances of the data platform and WDP have enough variability to present major challenges.
Identify core goals
- To protect target environments
- Only deploy what is needed
- Allow data and artefacts to exist outside of project areas
- Reduce complexity
- Automate as much as possible
- Protect our IP
- Avoid overburdening suppliers with complexity
Those are some ambitious goals, but they are the result of decades of experience running multiple projects and working on many client projects. Much of this is outside the scope of this article, but to summarise, we want to stop a lot of unnecessary activity. For example, rather than expecting a supplier to check out all of our source code and spend a long time setting up their environment, we ask: why? Instead, we simply ask them to build the components to our requirements and hand those over. We can then quickly deploy them to multiple application instances to see if they work.
IRPA and our Web Data Platform
That you are reading this article is evidence of our IRPA working. We have client-specific content, environment-specific configuration, and test environments that allow us to confidently publish to external production environments.
We set up our IRPA to determine exactly what should make it to our test environments. Certain artefacts are pre-built, and we are likely to set up a separate Visual Studio build process to enable these to be pre-built from instance-specific artefacts.
To avoid overburdening developers and suppliers, we keep our CMS specific to each client. We are unlikely to give access to our Visual Studio solution, sidestepping the major headaches found when onboarding new developers. Instead, aside from internal staff, developers will work on specific applications and functionality.
Automating deployment is complicated, simply because of the number of moving parts and the trial-and-error nature of getting it perfect. Whilst this complexity can never be eliminated, our IRPA doesn't force us to use specific applications or approaches. We choose what is simplest (often rawest), and then, once it works, we can stop worrying about it.
Often, software expects us to do things one way only. We have batch files specific to copying applications using xcopy. We also configure copy processes inside our publisher file to move application artefacts, because it is more intuitive to do it there. We generate instances of configuration and deployment artefacts to prevent duplication of effort.
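The instance-generation step can be sketched as follows. This is a minimal illustration in Python, not part of IRPA itself: the template body, placeholder syntax, and instance names are all our own assumptions, standing in for whatever format the publisher file actually uses.

```python
from string import Template

# A hypothetical configuration template; each ${...} placeholder is
# substituted once per deployment instance, so the template itself
# is written only once.
CONFIG_TEMPLATE = Template(
    "AppName=${app}\n"
    "Environment=${env}\n"
    "DataPath=${data_path}\n"
)

# Hypothetical per-instance settings -- in practice these would live
# alongside the publisher file rather than in code.
INSTANCES = {
    "client_a_test": {"app": "WDP", "env": "Test", "data_path": r"D:\data\client_a"},
    "client_a_prod": {"app": "WDP", "env": "Production", "data_path": r"E:\data\client_a"},
}

def generate_configs(instances: dict) -> dict:
    """Render one configuration body per instance from the shared template."""
    return {name: CONFIG_TEMPLATE.substitute(values)
            for name, values in instances.items()}

if __name__ == "__main__":
    # Each rendered body would normally be written to the instance's
    # deployment folder; here we just print them.
    for name, body in generate_configs(INSTANCES).items():
        print(f"--- {name} ---")
        print(body)
```

The point of the sketch is the shape of the approach: one template, many rendered instances, so a change to the shared settings is made in exactly one place.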
Nothing new here?
We found ourselves torn between different continuous integration solutions: Jenkins, Bamboo, TeamCity. There is also Octopus Deploy. For a company as small as Info Rhino, this could mean two applications plus a batch automation framework, along with all the server instances, licensing, maintenance, and support. It isn't that these are not fantastic tools; it is just that we don't need that much.
We are predominantly a Microsoft software shop. We have automated deployments with NuGet packaging, MSBuild, and project configurations, to name but a few. However, much of our working architecture is not Microsoft. As incredible as much of this tooling is, we didn't want dependencies on Visual Studio for non-Visual-Studio tasks. Similarly, we will be removing most database dependencies, not because we don't think databases are awesome, but to reduce complexity.
Thinking about our Info Rhino Process Automation for deployment management
Naturally, a balance is needed between too much lower-level deployment automation and the enterprise-ready, more commercial offerings.
We see our Just-in-Time (JIT) approach to deployment automation as a worthy addition to any organisation serious about streamlining its SDLC.
We could think of JIT more realistically as: our release automation process is chaotic, so it's time to sort it out.
It isn't for us to tell clients how to set up their SDLC, but the one thing that always kills productivity is forcing technology teams to all use a single toolkit or approach. Indeed, centralisation versus decentralisation is an ongoing debate. We say: take the path of least resistance and centralise as complexity increases, but never overburden teams with complexity just for the sake of centralisation.
IRPA is a great solution to do that.