Analyzing Technical Processes for GCP
Architects are involved in many different types of technical processes:
- Continuous Deployment
- Continuous Delivery
- Post-mortem Analysis
- Development Lifecycle Planning
- Business Continuity
- Disaster Recovery (DR)
Here we will discuss these processes in relation to business needs and goals. We will learn to focus on and define these processes rather than simply follow them.
Software Development Lifecycle (SDLC)
The software development lifecycle comprises the steps that software, and the people who engineer it, go through from beginning to end to create and host a service. It includes 12 phases, though in some cases these are collapsed or combined into fewer, and some are regarded as pre-SDLC steps.
- Scope Analysis
- Requirements Analysis
- Integration & Testing
Source: Software Development Lifecycle - Wikipedia
Every phase does work that is required to produce quality software. It is a cycle because you reiterate over these steps until the software is no longer used. After the Maintenance step, the process can start over at any of the earlier steps. After software is deployed, the next iteration could be as involved as drafting another Proposal, owned by the leads responsible for those duties. Alternatively, if the requirements for the next iteration are already known, the cycle can loop straight back to the Development phase. Proposal, scope analysis, planning, and requirements analysis can even be done by non-developers or teams of analysts.
For this reason we're going to jump right into Planning.
Planning is a step performed by the Project Manager. They create the spaces that track work, and the spaces where the documentation, solution architecture design document, specifications, and roadmaps will live. They create the roadmap for the different project phases, along with templates for sprint planning, sprint retros, and the creation of overarching tasks often called 'epics'.
Requirements analysis may be done by developers and architects together. The goal is to fully understand the needs and wants of the proposal and find potential ways to meet them. The problem is discussed and ideas are put together to address it. Here the solutions are not designed but considered. Any spikes needed to suss out requirements are performed by developers or other engineers. A spike is a short development effort in which a developer tries out a feature to gain knowledge required for planning a full-fledged effort in the context of existing systems. Spikes are often isolated to proofs of concept. Proof-of-concept projects might exist here and feed back into the requirements for an actual project.
In this phase you're trying to:
- Grasp the scope of the needs and wants of the proposal
- Track and assess all possible solutions
- Evaluate the cost benefits of the different paths toward a solution
Understanding the scope requires both knowledge of the domain in question (for a mail problem, familiarity with mail operations and development) and systems and software knowledge of the existing infrastructure. Domain knowledge, for example, is knowing that Kubernetes Secrets are not very secure by default. Systems and software knowledge is knowing where you'll inject and use the Google client libraries to fetch secrets from Google Secret Manager (GSM). This is precisely why developers, architects, and reliability engineers all engage together in this phase.
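To make that systems-and-software knowledge concrete, here is a minimal sketch of reading a secret with the `google-cloud-secret-manager` client library. The project and secret IDs are invented for illustration; the resource path format is the one the Secret Manager API documents.

```python
# Sketch: reading a secret from Google Secret Manager (GSM).
# IDs below are illustrative only, not from any real project.

def secret_version_path(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Build the fully qualified resource name of a secret version."""
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

def fetch_secret(project_id: str, secret_id: str) -> str:
    # Requires the google-cloud-secret-manager package and application
    # default credentials; this is the kind of injection point that
    # requirements analysis has to account for.
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(
        name=secret_version_path(project_id, secret_id)
    )
    return response.payload.data.decode("utf-8")
```

Knowing exactly where a call like this would live in the existing codebase is the kind of detail that separates a viable plan from a vague one.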
When searching for solutions to your problem, you need to be able to filter them out without trying them. The solutions you filter out are those that aren't feasible, don't fit your use case, or don't fit within your limitations. Once you know the limits of the project, you can search for possible solutions. If your Google Secret Manager project has a constraint that it must work for both in-house apps and third-party apps, the direction you take will be wildly different than if you weren't filtering on this rubric. You'll also consider whether commercial software meets your needs at a better cost than building it yourself.
Purchased or Free and Open Source Software (FOSS) can meet a wide range of use cases faster than developing something new. It also frees the team to focus on other, easier-to-solve problems. Purchased software or paid FOSS support can help offset the costs of provisioning new services. The disadvantages are potential licensing models and costs, and being locked into a feature set that doesn't evolve with your needs.
You can decide to build from scratch, from a framework, or from an open source project. There are different considerations with each: how much modification does ready-made software require, what languages and formats does it exist in, and do you have to acquire talent to work with it? Consider the lifecycles of the software you use. For instance, if you build Docker images from other images, knowing the release cycles of those images helps you cut new releases when new operating system versions come out. Paying attention to the popularity and maintainers of an application can tell you whether a project has become deprecated. You can avoid deprecated software if you do not want to become its new maintainer within your own use of it. Or you could choose actively maintained software to fork and modify, so that you can roll security backports from the upstream project into your own.
Building from scratch allows full control but involves the most work: the most maintenance, the most planning, the most issue resolution, and a team with the necessary talent and skillsets.
Once you have several viable solutions to consider, spike the one with the greatest cost benefit first. You'll know which that is because you can run a Cost Benefit Analysis on all the options we've discussed.
Cost Benefit Analysis
Part of analysis is the cost benefit analysis of meeting the requirements with your various solution options. When asked to justify the decisions in your project, you'll be asked for this, and you'll be able to contrast the value of each solution. As part of this you'll calculate the ROI of the different options to arrive at each solution's value. At the end of this phase you'll decide which solutions to pursue in the Design phase.
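The ROI comparison can be sketched with the standard formula, ROI = (benefit - cost) / cost. The option names and dollar figures below are invented purely to show the mechanics:

```python
# Sketch: comparing solution options by simple ROI.
# ROI = (benefit - cost) / cost; all figures are invented for illustration.

def roi(benefit: float, cost: float) -> float:
    return (benefit - cost) / cost

options = {
    "build from scratch":   roi(benefit=500_000, cost=300_000),
    "buy commercial":       roi(benefit=450_000, cost=150_000),
    "adopt FOSS + support": roi(benefit=400_000, cost=100_000),
}

# The option with the highest ROI is the one to spike first.
best = max(options, key=options.get)
```

Real analyses also weigh factors ROI alone misses, such as lock-in and maintenance burden, but a table like this is usually the starting point.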
As part of the design phase, you'll plan out how the software will work, the structure of the schemas and endpoints, and the functionality these will achieve. This phase starts with a high-level design and ends with a detailed one.
The high-level design is an inventory of all the top-level parts of the application. Here you'll identify how components will interact as well as their overarching functions. You might work up UML or Mermaid diagrams describing parts and interactions.
The detailed design is a plan for implementing each of these parts. The parts are modularized in thought and broken down into the most sensible and efficient anatomies in which to exist. Some of the things planned include error codes or pages, data structures, algorithms, security controls, logging, exit codes, and wireframes for user interfaces.
During the design phase, it's best to work directly with the users of the system, just as you would work with other disciplines during other phases. The users of a system have a closer relationship to the requirements. In this phase developers will choose which frameworks, libraries, and dependencies to use.
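A detailed design often pins down artifacts like error codes and data structures before real implementation begins. Here is a minimal sketch of what a design document might specify, with entirely invented codes and fields:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical error codes a detailed design might enumerate up front.
class ErrorCode(Enum):
    OK = 0
    SECRET_NOT_FOUND = 10
    PERMISSION_DENIED = 11

# Hypothetical data structure for a request in the same design.
@dataclass
class SecretRequest:
    project_id: str
    secret_id: str
    version: str = "latest"

def validate(req: SecretRequest) -> ErrorCode:
    # Design-level rule: both identifiers must be non-empty.
    if not req.project_id or not req.secret_id:
        return ErrorCode.SECRET_NOT_FOUND
    return ErrorCode.OK
```

Writing these down at design time lets developers, testers, and documentation writers all work from the same contract.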
Develop, Test, Implement
Under development, software is created by engineers and built as artifacts which are pushed to a repository. These artifacts are deployed onto an operating system with a package manager, SSH, direct copying, a build process, or Dockerfile commands. Artifacts can contain code, binaries, documentation, configuration, or raw MIME-type files.
In this phase developers might use tools like VS Code, analysis applications, and administration tools, while changes are committed with source control tools that have GitOps processes attached to them. All of these processes are in the domain of an architect to conceive and track when designing a project.
Developers also test as part of the commands they give the continuous integration (CI) system. Well before the CI steps are created, the developer has written unit and integration tests and knows the commands to run them, so the automation team can include them when building the CI portion of the development operations. Unit tests are language specific, while integration tests generally exercise the API endpoints, and you have a choice of software for that.
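As a sketch of the kind of test command a developer hands off to the automation team, here is a unit test written with Python's built-in `unittest` module, runnable as `python -m unittest`. The helper function under test is invented for the example:

```python
import unittest

def secret_version_path(project_id, secret_id, version="latest"):
    # Hypothetical helper from the codebase, shown here as the unit under test.
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

class TestSecretVersionPath(unittest.TestCase):
    def test_defaults_to_latest(self):
        path = secret_version_path("demo", "api-key")
        self.assertEqual(path, "projects/demo/secrets/api-key/versions/latest")

    def test_explicit_version(self):
        path = secret_version_path("demo", "api-key", "3")
        self.assertEqual(path, "projects/demo/secrets/api-key/versions/3")

if __name__ == "__main__":
    unittest.main()
```

Because the run command is a one-liner, the automation team can drop it straight into a CI step with no extra glue.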
Documentation is crucial to the SDLC because it lets others using the software know how to operate it. Often this is your DevOps team handling automation in deployments. Developer documentation can take the form of inline comments within the code, but developers should also release a manual as a README.md file in the root of the source control repository. A README.md file should exist in every folder where a different component has different usage instructions.
Your entire solution architecture design should be documented. For a lot of companies this is a page in an intranet wiki like Confluence.
Maintenance is the practice of keeping the software running and updated. In Agile software practices, developers maintain code and run deployment pipelines to development environments, which graduate to higher environments. In a fully agile environment, automation engineers create the pipelines, but an automation release team approves the gates so that developer-initiated deployments can be released to production under supervision during a release window.
Keeping a service running includes logging, monitoring, alerting, and mitigation. Some of this work includes log rotation and performance scaling. Developers control log messages, but infrastructure developers like cloud engineering teams might create the Terraform modules that automation engineers use to automatically create alerts and logging policies.
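To make the alerting idea concrete, here is a toy sketch of the threshold rule such an alert policy typically encodes. The metric values and the 5% threshold are invented for illustration:

```python
# Toy alert evaluation: fire when the error rate over a window exceeds
# a threshold. Values are illustrative, not from any real policy.

def error_rate(errors: int, requests: int) -> float:
    return errors / requests if requests else 0.0

def should_alert(errors: int, requests: int, threshold: float = 0.05) -> bool:
    return error_rate(errors, requests) > threshold
```

In a managed platform, rules like this live in the monitoring system's alert policy rather than in application code, but the logic being expressed is the same.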
Continuous Integration / Continuous Delivery (CI/CD)
Continuous integration is the practice of building code every time there is a change to a code base. This usually starts with a commit to a version control system. If the branch or tag of the commit matches the rules for the continuous part, then the integration part takes place automatically. Integration pipelines often have build, test, and push steps.
Continuous deployment is the practice of deploying new artifacts as soon as they are available. If a repository's continuous integration settings build a package and place it in the repo, continuous deployment systems polling for new artifacts may trigger a deployment pipeline when one is found. So once a new version is added to Nexus or a deb repository, CD systems often send that artifact down the line.
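That polling logic can be sketched in a few lines, with a fake artifact listing standing in for a query against a real repository like Nexus:

```python
# Sketch: detect a new artifact version and decide whether to deploy.
# The version list is a stand-in for querying a real artifact repository.

def latest_version(versions: list[str]) -> str:
    # Compare dotted versions numerically, so "1.10.0" beats "1.9.2".
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

def needs_deploy(deployed: str, available: list[str]) -> bool:
    """True when the repository holds a version newer than what is live."""
    return latest_version(available) != deployed
```

Real CD systems add concerns this sketch omits, such as promotion rules per environment and rollback, but the trigger condition is essentially this comparison.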
The cornerstone of CI/CD is that individual features can be added quickly, unlike past methods which had to weave several new features together into a major release. Instead, new features are built on separate feature branches, those feature branches have builds, those builds can be deployed quickly, and once tested the feature branch can be merged into one of the trunks. If you're using trunk-based development, the version control system acts as an integration engine that takes all these features and incorporates them together. In the context of hosted services, users get a low-risk but up-to-date experience.
CI/CD is testing heavy. In real production pipelines, tests can make up more than half of the pipeline steps and are used throughout the workflows. Automated tests allow test cases to pass or fail without human intervention. This means services can be tested with scripted steps and deployed only if those steps succeed. This prevents the building and deployment of artifacts that do not pass tests.
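The gate described above can be sketched as a tiny pipeline runner: each step is a function, and later steps, including deploy, only run if every earlier step succeeded. The step names and outcomes are invented:

```python
# Sketch: a pipeline that halts before deploy if any earlier step fails.

def run_pipeline(steps):
    """Run (name, func) steps in order; stop at the first failure.

    Returns the list of step names that actually ran.
    """
    ran = []
    for name, step in steps:
        ran.append(name)
        if not step():
            break  # a failed step blocks everything after it
    return ran

# Illustrative steps: the unit-test step fails, so deploy never runs.
steps = [
    ("build", lambda: True),
    ("unit-tests", lambda: False),
    ("deploy", lambda: True),
]
```

Real CI systems express this same short-circuit behavior declaratively, as step dependencies in a pipeline definition file.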
In certain critical cases, continuous delivery isn't possible because the safety risk of deploying the latest code is too high. Sometimes code needs to be certified and installed by hand.
The foundation of continuous deployment and continuous integration is version control of the software source code. When developers check out code to work on and improve it, they get it from a git repository. They make their changes and push them back to the git repository. Git makes a revision and keeps both copies. Points in time in the revision history are called references; branches and tags are references. You can merge two disparate code bodies by merging two references. A request to merge two references is called a pull request. So to merge into a branch like develop, you'd create a pull request from your 'feature branch' into the 'trunk', which in this case is develop.
This is how basic version control works with source code. When you commit, the repository server will often notify listening services that the code has been updated. Those services look at the repo, and if they find a build instruction file they perform the steps listed in it. This way, when we want to build our software, we put all the means to do it in that file. When new commits are made to the repo, listeners build the application based on our instructions.
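The listener's first decision can be sketched as a simple check: on a push event, look at the repository's files for a known build-instruction file and only then run its steps. The file names below are real conventions from common CI systems, but the function itself is invented:

```python
# Sketch: decide whether a push should trigger a build, based on the
# presence of a recognized build-instruction file in the repository.

BUILD_FILES = {"cloudbuild.yaml", "Jenkinsfile", ".gitlab-ci.yml", "Makefile"}

def should_build(repo_files: set[str]) -> bool:
    """Return True if the repo contains a recognized build instruction file."""
    return bool(BUILD_FILES & repo_files)
```

In practice the listener then parses the file it found and executes its steps; this sketch only covers the trigger decision.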
If there are no code updates, listeners, or build instructions, there is no continuous part and no integration happens. In the ancient software world, a developer would commit code and email release notes to an integration engineer, who would run and babysit a build script while the developer went and got coffee. Now the developer makes a commit and then watches a job console with output logs from the build, without communicating with other engineers... they still get coffee while the build runs.
Fixing Incident Culture
Enterprise IT Processes
Business Continuity & Disaster Recovery