Student Spotlight: Carmen Mays of Elevators!

We recently caught up with Carmen Mays, Founder and CEO of Elevators, to ask her about the impact that her recent Scrum Master and Product Owner training has made in her professional life.

First, a little about her company:

“Elevators fosters equitable entrepreneurial ecosystems by developing innovative corporate supply chain strategies. We build community and capacity for melanated creatives and institutions integral to the entrepreneurial ecosystem.

I started Elevators with the simple premise of ensuring creatives of color had the tools and opportunities they needed to make a living doing what they love. Elevators has successfully engaged creatives in culturally relevant business development. Elevators also creates corporate engagement and supply chain strategies that foster equitable entrepreneurial ecosystems.”

You can find out more about her company, Elevators, right here.

Since this was a short interview format, I’ll just list the questions along with her responses below!

Q: How has the Scrum Master and Product Owner training changed the mindset with which you approach your business?

I am much more clear about how to introduce new ideas and refine existing ones. I am able to go deeper into the structure of the products thereby making them more effective.

Q: Have you incorporated any of the training into your workflow yet?

Yes! Particularly the Kanban Board.

Q: Have you seen any immediate benefits to bringing these thought processes into your business?

I’m much more productive and because I can see what I’ve completed, I don’t overwork.

Q: Do you think LSM/LSPO training would be helpful for other startups?

Absolutely!

Q: Would it be an added benefit for all members of a team to have this training (so that everyone understands it and is on the same page)?

Yes! As with any training, it’s for everybody.

If you are interested in learning more about our LSM (Scrum Master) and LSPO (Product Owner) training classes then head over to our class page to get the details. Currently, those living near Birmingham, AL might even be eligible for a full scholarship to attend!

Student Spotlight: Maureen Sears of HC3!

We recently caught up with Maureen Sears, Project Manager at HC3, to ask her about the impact that her recent Scrum Master and Product Owner training has made in her professional life.

HC3 is a data-driven tech company delivering customer communications for their clients. By managing complex data generated from multiple client systems, they help financial service organizations communicate with their customers in meaningful ways. HC3 offers focused solutions for statement and notice redesign, intelligent marketing campaigns, and seamless delivery of both print and digital communications. Through these solutions, HC3 empowers financial service organizations to give their customers a fully customizable document experience.

You can find out more about her company, HC3, right here.

Since this was a short interview format, I’ll just list the questions along with her responses below!

Q: How has the Scrum Master and Product Owner training changed the mindset with which you approach your business?

As a team, our Project Management team has adopted a more fluid perspective on how we handle implementations. Our improved mindset allows us to more easily adapt and grow with challenges that come our way.

Q: Have you incorporated any of the training into your workflow yet?

One of the most important pieces of training that we have incorporated into our workflow is the feedback loop.

Constantly sharing ideas between other departments and getting feedback from our clients has enabled us to deepen our understanding of other departments to create a better experience for our clients.

Q: Have you seen any immediate benefits to bringing these thought processes into your business?

Our digital solution HC3 offers our clients is constantly evolving and improving, so the implementation process is constantly evolving and improving at the same time.

We have been able to refine the process and tool more rapidly than ever before by including a constant internal and external feedback loop.

Q: Do you think LSM/LSPO training would be helpful for other startups?

This training did a great job of building the foundation for working in the Scrum world. It would be extremely beneficial for a startup because it is easier to get everyone’s commitment to building a new process instead of changing an existing one.

Q: Would it be an added benefit for all members of a team to have this training (so that everyone understands it and is on the same page)?

We were fortunate enough to have members of our project management team, as well as the manager of the development team, participate in the training. Having everyone together to think about how we could apply this knowledge in our work environment and bounce ideas off each other during the course was invaluable.

If you are interested in learning more about our LSM (Scrum Master) and LSPO (Product Owner) training classes then head over to our class page to get the details. Currently, those living near Birmingham, AL might even be eligible for a full scholarship to attend!

Student Spotlight: Ethan Summers of Fledging!

We recently caught up with Ethan Summers, Commercial Operations Lead with Fledging, to ask him about the impact that his recent Scrum Master and Product Owner training has made in his professional life.

Fledging is part of Birmingham, AL’s rapidly growing tech startup community and they are doing some awesome things with product design and release in the portable SSD space. If you have a need for fast, secure, portable, and stylish storage, check them out here.

Since this was a short interview format, I’ll just list the questions along with his responses below!

Q: How has the Scrum Master and Product Owner training changed the mindset with which you approach your business?

We’re building it into everything. We’re a startup in an industry full of tech titans with huge bank accounts, so all we really have is our creativity and agility. Scrum is helping us be a lot more nimble. One major way is our Product process. We’ve rebuilt how we evaluate product concepts so that we get to “Yes” or “No” faster and in a validated way.

Q: Have you incorporated any of the training into your workflow yet?

We’re right in the middle of that right now. The Product example above is one good example. But we’re trying to use it for everything. We don’t see any domain as off limits, even the “perpetual work” domains like our Production team, because they can run process improvement sprints or even decide to treat a whole month, with all its Production demands, as a kind of sprint. We’ve also started using concepts like the Daily Roundup right away. 

Q: Have you seen any immediate benefits to bringing these thought processes into your business?

We have! The Daily Roundup came at the perfect time during the quarantine. It would be really easy to lose touch with each other. Instead, we implemented Daily Roundups and now we all touch base at the beginning and end of each day to discuss what we’re working on, expected roadblocks, and so on. Our team’s reporting a lot of satisfaction both personally and professionally with this process.

Q: Do you think LSM/LSPO training would be helpful for other startups?

As long as they’re not a competitor, absolutely! But it would be a terrible waste of time for our competitors. 

Q: Would it be an added benefit for all members of a team to have this training (so that everyone understands it and is on the same page)?

Yes, I think so. Our CEO already has the training. Our CSO and I went through the same class. We have a Software Engineer going through it right now and our Head of Operations goes in May. Frankly, I think everyone on our team should do it. 

If you are interested in learning more about our LSM (Scrum Master) and LSPO (Product Owner) training classes then head over to our class page to get the details. Currently, those living near Birmingham, AL might even be eligible for a full scholarship to attend!

Azure DevOps Build and Release Agents With Docker

Azure DevOps offers a great pipeline feature for automating the build and release process. After we realized that Microsoft’s hosted agents took longer than we wanted to gather all the required assets for our builds, we decided to investigate building our own agents to run the process (Microsoft offers the ability to host your own agents).

Since we use .NET Core for most of our projects, we can’t use Azure container management because it doesn’t support Windows containers. Instead, we created Windows containers with Docker on a native Windows machine, which allows us to host our build agents in Docker on a Windows virtual machine in Azure.
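If you’re setting up a similar VM, one quick way to confirm the Docker engine is in Windows-container mode (a suggested sanity check, not a step from our original setup) is:

docker info --format "{{.OSType}}"

This should print “windows”; if it prints “linux”, switch the engine to Windows containers before continuing.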

In this post, we’ll walk through the whole process of creating a custom build agent with Docker. Microsoft has its own solution on GitHub, which was the basis for this project. All the files mentioned in this post are included at the bottom of this page.

Creating the Dockerfile

The dockerfile specifies the contents of our container image, so it’s important to include everything the build agent will need. I’ll highlight the main elements of the dockerfile in this section, but you can find the entire dockerfile at the bottom of the page.

The first element of every dockerfile is the base image that the new image will be built from. Since we need a Windows image to install .NET Core, we will base it on Windows Server Core.

FROM microsoft/windowsservercore:10.0.14393.1358
ENV WINDOWS_IMAGE_VERSION=10.0.14393

We need to install several components on these servers, and we’ve found that Chocolatey is a convenient package manager for running clean installs. We can invoke a PowerShell command to download and run the Chocolatey install script, and then set the default choco configuration.

RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
RUN choco config set cachelocation C:\chococache

Once Chocolatey is installed, we can use it to install all our dependencies, most of which are shown in the list below. You can also add anything else you may need. Most tools have a Chocolatey package, and you can find specific installation instructions on the Chocolatey site.

RUN choco install \
    git  \
    nodejs \
    curl \
    docker \
    dotnet4.6.1 \
    --confirm \
    --limit-output \
    --timeout 216000 \
    && rmdir /S /Q C:\chococache

Most build servers I’ve seen rely on a full copy of Visual Studio being installed, but I found that installing Visual Studio Build Tools is really all that is needed. Here we install the build tools with choco, the same way we installed the other packages.

RUN choco install \
    visualstudio2017buildtools

Now all we have left to install is .NET Core, which can’t be done with Chocolatey, so we have to invoke a web request to download the archive. Once it’s downloaded, we can extract the files and remove the zip file, which is no longer needed.

# Install .NET Core
ENV DOTNET_VERSION 2.1
ENV DOTNET_DOWNLOAD_URL https://download.visualstudio.microsoft.com/download/pr/ce443d89-75f1-4122-aaa8-c094a9017b4a/255b06ace4207a8ee923758160ed01c3/dotnet-runtime-2.1.5-win-x64.zip

RUN Invoke-WebRequest $Env:DOTNET_DOWNLOAD_URL -OutFile dotnet.zip; \
    Expand-Archive dotnet.zip -DestinationPath $Env:ProgramFiles\dotnet -Force; \
    Remove-Item -Force dotnet.zip
    
# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 2.1
ENV DOTNET_SDK_DOWNLOAD_URL https://download.visualstudio.microsoft.com/download/pr/28820b2a-0aec-4c24-a271-a14bcb3e2686/5e0ad8ae32f1497e8d0cace2447b9e01/dotnet-sdk-2.1.403-win-x64.zip

RUN Invoke-WebRequest $Env:DOTNET_SDK_DOWNLOAD_URL -OutFile dotnet.zip; \
    Expand-Archive dotnet.zip -DestinationPath $Env:ProgramFiles\dotnet -Force; \
    Remove-Item -Force dotnet.zip

The last thing we need to do is download the tools that connect our agent to the Azure DevOps agent pool. If you navigate to your agent pools in Azure DevOps, you will find a “Download Agent” button, where you can copy the “Download the Agent” link to a zip file containing all the tools the agent needs to join the pool.

Once we have the link, we can add another step to our dockerfile to download and extract this zip file in our container image. We will create a new directory for the build agent files and extract the file there, then remove the zip file.

#Install Agent
RUN mkdir C:\BuildAgent;

ENV VSTS_ACCOUNT_DOWNLOAD_URL "<agentdownloadurl>"

RUN Invoke-WebRequest $Env:VSTS_ACCOUNT_DOWNLOAD_URL -OutFile agent.zip; \
    Expand-Archive agent.zip -DestinationPath c:\BuildAgent -Force; \
    Remove-Item -Force agent.zip

We want these agents to be fully automated, so we need a script that will configure and connect our agent to the agent pool when the docker container is started. We’ll take a detailed look at creating this script in the next section, but go ahead and add these steps to the end of the dockerfile. This makes the build agent directory we just created the working directory and copies our start scripts into that directory. When the container is started, it will run the start.cmd file.

WORKDIR C:/BuildAgent
COPY ./start.* ./
CMD ["start.cmd"]

Creating the Start Scripts

The start script is what actually connects the agent to Azure DevOps, making use of the tools we downloaded from the agent pool earlier.

Before we make the script, we’ll create a personal access token for the agent to use for authentication. To create one, click on your user profile at the top right of Azure DevOps, select the Security tab, navigate to Personal Access Tokens, and choose “New Token.” You can name the token and specify the privileges it requires; for build agents, it only needs permission to read and manage the agent pools. Once the token is created, save it for use in the start script.

Now we can create the PowerShell script, start.ps1. First, we add the variables the script will use. Here we declare them as environment variables at the beginning of the script; alternatively, you can pass them into the container when starting it with Docker.

$env:VSTS_ACCOUNT = ""
$env:VSTS_TOKEN = ""
$env:VSTS_POOL = ""

    VSTS_ACCOUNT is the name of your organization (e.g., VSTS_ACCOUNT.visualstudio.com or dev.azure.com/VSTS_ACCOUNT)
    VSTS_TOKEN is the personal access token we just generated
    VSTS_POOL is the name of the agent pool to join; if left blank, the agent will be added to the default pool

Refer to the files at the end of the page for the full PowerShell script.

We now need to create a simple start.cmd file to trigger our PowerShell script. In our dockerfile, we already set this script to run every time the container starts.

PowerShell.exe -ExecutionPolicy ByPass .\start.ps1

Starting the Docker Containers

After all the configuration is done, we’re ready to build our custom docker images. We need to create a directory with all the files we’ve created so far: start.ps1, start.cmd, and dockerfile. Note that dockerfiles have no extension.

Now open PowerShell, navigate to the newly created directory, and run the following docker command. Be sure to give your image a name here.

docker build -t "<imagename>" .

The build will take a little while as the dockerfile installs all the components; the .NET Core step in particular may appear to hang. Be patient, and the installer will finish its work. Once the image has been created, you’ll see “Successfully tagged imagename:latest” in the console.

We can now start our container with the newly created docker image.

docker run -it -d --restart always --name "" --hostname "" imagename:latest

I configured my start script to take the hostname of the Docker container and use it as the name of the agent in the Azure DevOps agent pool. Docker randomly generates a hostname for each container, so I pass the hostname in as a parameter here to give each agent a specific name. This is also where you would pass in the environment variables if you did not declare them in the start.ps1 file. Many more docker run arguments can be found in the Docker documentation.
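For illustration only, a filled-in run command might look like the sketch below. This assumes the VSTS_* assignments were removed from start.ps1 so the values can be supplied at run time; the container name, pool, and credential values here are placeholders, not values from the original setup.

docker run -it -d --restart always --name "buildagent01" --hostname "buildagent01" -e VSTS_ACCOUNT="yourorganization" -e VSTS_TOKEN="yourpersonalaccesstoken" -e VSTS_POOL="Default" imagename:latest

Because the start script falls back to the container’s hostname for the agent name, this agent would show up in the pool as buildagent01.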

To see the running Docker containers, issue a “docker ps” command in the console. We should also be able to see the newly created agent in the specified agent pool in Azure DevOps.

If you don’t see your agent here after a few minutes, try restarting the Docker container and connecting to it with PowerShell. This will allow you to run the start.ps1 file manually to see if there is an error with the configuration.
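One way to do that, with a placeholder container name, is:

docker restart buildagent01
docker exec -it buildagent01 powershell
cd C:\BuildAgent
.\start.ps1

Running start.ps1 interactively like this surfaces any authentication or URL errors directly in the console.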

Conclusion

We can now start as many build and release agents as we need from this Docker image. If future projects require other dependencies on the build servers, we can simply edit the dockerfile and rebuild the image. In our case, we found that the build agents were more CPU-bound than memory-bound, so we selected a compute-optimized server with several cores and shaved minutes off our build and release process.

Project Code

dockerfile

FROM microsoft/windowsservercore:10.0.14393.1358
ENV WINDOWS_IMAGE_VERSION=10.0.14393

ENV chocolateyUseWindowsCompression=false

RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

RUN choco config set cachelocation C:\chococache

RUN choco install \
    git  \
    nodejs \
    curl \
    docker \
    dotnet4.6.1 \
    visualstudio2017buildtools \
    azure-cli \
    azurepowershell \
    --confirm \
    --limit-output \
    --timeout 216000 \
    && rmdir /S /Q C:\chococache

# common node tools
RUN npm install gulp -g && npm install grunt -g && npm install -g less && npm install phantomjs-prebuilt -g

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

# Install .NET Core
ENV DOTNET_VERSION 2.1
ENV DOTNET_DOWNLOAD_URL https://download.visualstudio.microsoft.com/download/pr/ce443d89-75f1-4122-aaa8-c094a9017b4a/255b06ace4207a8ee923758160ed01c3/dotnet-runtime-2.1.5-win-x64.zip

RUN Invoke-WebRequest $Env:DOTNET_DOWNLOAD_URL -OutFile dotnet.zip; \
    Expand-Archive dotnet.zip -DestinationPath $Env:ProgramFiles\dotnet -Force; \
    Remove-Item -Force dotnet.zip
    
# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 2.1
ENV DOTNET_SDK_DOWNLOAD_URL https://download.visualstudio.microsoft.com/download/pr/28820b2a-0aec-4c24-a271-a14bcb3e2686/5e0ad8ae32f1497e8d0cace2447b9e01/dotnet-sdk-2.1.403-win-x64.zip

RUN Invoke-WebRequest $Env:DOTNET_SDK_DOWNLOAD_URL -OutFile dotnet.zip; \
    Expand-Archive dotnet.zip -DestinationPath $Env:ProgramFiles\dotnet -Force; \
    Remove-Item -Force dotnet.zip

#Install Agent
RUN mkdir C:\BuildAgent;

ENV VSTS_ACCOUNT_DOWNLOAD_URL "<agentdownloadurl>"

RUN Invoke-WebRequest $Env:VSTS_ACCOUNT_DOWNLOAD_URL -OutFile agent.zip; \
    Expand-Archive agent.zip -DestinationPath c:\BuildAgent -Force; \
    Remove-Item -Force agent.zip

SHELL ["cmd", "/S", "/C"]

RUN setx /M PATH "%PATH%;%ProgramFiles%\dotnet"

# Trigger the population of the local package cache
ENV NUGET_XMLDOC_MODE skip

RUN mkdir C:\warmup \
    && cd C:\warmup \
    && dotnet new \
    && cd .. \
    && rmdir /S /Q C:\warmup 

WORKDIR C:/BuildAgent

COPY ./start.* ./
CMD ["start.cmd"]

start.ps1

$ErrorActionPreference = "Stop"
$env:VSTS_ACCOUNT = ""
$env:VSTS_TOKEN = ""
$env:VSTS_POOL = ""

if ($env:VSTS_ACCOUNT -eq $null) {
    Write-Error "Missing VSTS_ACCOUNT environment variable"
    exit 1
}

if ($env:VSTS_TOKEN -eq $null) {
    Write-Error "Missing VSTS_TOKEN environment variable"
    exit 1
} else {
    # If VSTS_TOKEN is a path to a file (e.g., a mounted secret), read the token from that file instead
    if (Test-Path -Path $env:VSTS_TOKEN -PathType Leaf) {
        $env:VSTS_TOKEN = Get-Content -Path $env:VSTS_TOKEN -ErrorAction Stop | Where-Object {$_} | Select-Object -First 1
        
        if ([string]::IsNullOrEmpty($env:VSTS_TOKEN)) {
            Write-Error "Missing VSTS_TOKEN file content"
            exit 1
        }
    }
}

# Use the provided agent name, or default to the container hostname
if ($env:VSTS_AGENT -eq $null) {
    $env:VSTS_AGENT = $env:COMPUTERNAME
}

if ($env:VSTS_WORK -ne $null)
{
    New-Item -Path $env:VSTS_WORK -ItemType Directory -Force
}
else
{
    $env:VSTS_WORK = "_work"
}

if($env:VSTS_POOL -eq $null)
{
    $env:VSTS_POOL = "Default"
}

# Set The Configuration and Run The Agent
Set-Location -Path "C:\BuildAgent"

& .\bin\Agent.Listener.exe configure --unattended `
    --agent "$env:VSTS_AGENT" `
    --url "https://$env:VSTS_ACCOUNT.visualstudio.com" `
    --auth PAT `
    --token "$env:VSTS_TOKEN" `
    --pool "$env:VSTS_POOL" `
    --work "$env:VSTS_WORK" `
    --replace

& .\bin\Agent.Listener.exe run

start.cmd

PowerShell.exe -ExecutionPolicy ByPass .\start.ps1

 

Environment settings in an Angular CI/CD build process

The Problem

When building large-scale Angular applications, most people eventually need to provide their application with environment-specific variables, which doesn’t seem like a very big deal on the surface.  After all, Angular provides us with the “environment.ts” file, right?

It should be as easy as filling in your environment settings and then using the environment variable throughout the site, but this method introduces a level of uncertainty into the build process that may not be acceptable for all applications.

This uncertainty comes primarily from the need to rebuild the application before those new variables are accessible.  For example, when changing from a Dev environment to a Test environment, the build process is at the mercy of hundreds of code packages that may or may not have updated since the last environment promotion.  This can cause time-consuming build errors that aren’t even related to the quality of your code.

In some instances, it may be possible to avoid these errors by locking down dependencies with something like “npm shrinkwrap.” However, due to the complexity of the Angular CLI build process, this approach still allows for the potential of different outputs. Even if you manage to get consistent outputs, it’s still time-consuming to constantly rebuild the application in a build pipeline that could potentially get backed up from rapid promotions.

The Solution

Before the application loads, retrieve a CI/CD-generated settings file from the server. This ensures that an unchanged code base can pick up current settings no matter the environment. Here’s how our process works:

Create the configuration service

First, we create a service in one of the top-level Angular modules, typically the App Module or, if applicable, the Core Module. You can also create this service in any non-lazy-loaded module that provides services to the application.

import { Injectable } from '@angular/core';
import { JsonConvert } from 'json2typescript';

// AppSettings is the typed settings class described below; adjust the import path to your project
import { AppSettings } from './app-settings';

@Injectable({
    providedIn: 'root',
})
export class AppConfigService {

    public settings: AppSettings;
    get WebUrl() {
        return location.protocol + '//' + window.location.host + '/';
    }
    constructor() {}
    public load() {
        return new Promise((resolve, reject) => {
            const self = this;
            const xhttp = new XMLHttpRequest();
            xhttp.onreadystatechange = function () {
                if (this.readyState === 4 && this.status === 200) {
                    const data = JSON.parse(this.responseText);
                    const jsonConvert = new JsonConvert();
                    self.settings =  jsonConvert.deserializeObject(data.BaseSettings, AppSettings);
                    resolve(true);
                }
                if (this.status === 404) {
reject('Server cannot find appSettings.json file');
                }
            };
            xhttp.open('GET', 'appSettings.json', true);
            xhttp.send();
        }).catch(error => {
            console.error(error);
        });
    }
}

The “load” function is where we get our settings. Unfortunately, we can’t use Angular’s built-in HttpClient because it would form a circular reference with our authentication service, but depending on your project structure, you may be able to simplify this request with the built-in HTTP functionality.

After we receive the data, we use an external library (Json2Typescript) to convert the settings JSON into a TypeScript class so we can include helper functions if necessary. We then assign this object to the “settings” property on this service to be used elsewhere.
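The AppSettings class itself isn’t shown here, but as a rough sketch it might look something like this with json2typescript decorators; the ApiUrl property mirrors the example appSettings.json in the next section, and the helper getter is just an illustration.

import { JsonObject, JsonProperty } from 'json2typescript';

@JsonObject('AppSettings')
export class AppSettings {

    // Maps the "ApiUrl" value from appSettings.json; json2typescript requires an initialized property
    @JsonProperty('ApiUrl', String)
    public ApiUrl: string = '';

    // Example helper: guarantee a trailing slash on the API URL
    public get normalizedApiUrl(): string {
        return this.ApiUrl.endsWith('/') ? this.ApiUrl : this.ApiUrl + '/';
    }
}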

Create the appSettings.json file

This file structure is entirely up to you and your preferred CI/CD tool, but here’s an example for reference.

{
  "BaseSettings": {
    "ApiUrl": "http://localhost:99999/api/"
  }
}

Include the appSettings.json file as an asset

Angular needs to be informed specifically about which files are assets so it includes them in the build process. We place our appSettings.json in the root “src” folder of our web application. Because of this, our assets list inside of the angular.json file looks like this:

"assets":[
     "src/favicon.ico",
     "src/assets",
     "src/appSettings.json",
     "src/web.config",
 ]

If you place the settings in a different folder, you’ll need to link directly to the file, rather than the folder location (e.g., “src/assets/appSettings.json” rather than “src/assets/”).

Tell the app to load settings before launch

The last step on the client side is to make sure the application knows to load the settings before anything else initializes. This is done through Angular’s built-in APP_INITIALIZER token. Below is an example of our core module using APP_INITIALIZER to load settings.

import { NgModule, APP_INITIALIZER } from '@angular/core';
import { CommonModule } from '@angular/common';
import { HttpClientModule } from '@angular/common/http';

// Adjust these import paths to match your project structure
import { AppConfigService } from './app-config.service';
import { MainNavComponent } from './main-nav/main-nav.component';

export function loadConfig(config: AppConfigService) {
  return () => config.load();
}

@NgModule({
  imports: [
    CommonModule,
    HttpClientModule
  ],
  declarations: [MainNavComponent],
  providers: [
    {
      provide: APP_INITIALIZER,
      useFactory: loadConfig,
      deps: [AppConfigService],
      multi: true
    }
  ]
})
export class CoreModule { }

Your factory function should return a function that returns a promise. As you can see, we’ve injected our AppConfigService and returned our “load” function, which returns a promise. Once that promise resolves, the rest of the application is allowed to load, and the configuration service, with its settings, is available throughout the application.
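As an example of consuming those settings elsewhere, a data service might inject the configuration service and build its request URLs from the loaded ApiUrl. The DataService below and its “customers” endpoint are hypothetical, and the import path depends on your project layout.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

import { AppConfigService } from './app-config.service';

@Injectable({ providedIn: 'root' })
export class DataService {

    constructor(private http: HttpClient, private config: AppConfigService) {}

    // Build request URLs from the environment-specific ApiUrl loaded at startup
    getCustomers() {
        return this.http.get(this.config.settings.ApiUrl + 'customers');
    }
}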

Configure the build service

At this point, your Angular application should be set. The only thing left to do is configure the release pipeline to update appSettings.json for every environment, and your application will pick up the right settings when it starts. We use Azure DevOps build/release to update the appSettings.json file, but you can use any release-management tool that supports updating JSON files.
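For example, if your release tool doesn’t have built-in JSON substitution, a small PowerShell script step can rewrite the file during the release. This is only a sketch; the drop path and the $(ApiUrl) release variable are assumptions for illustration.

# Rewrite appSettings.json with values for the target environment during the release
$settingsPath = "$(System.DefaultWorkingDirectory)\drop\appSettings.json"

$settings = Get-Content $settingsPath -Raw | ConvertFrom-Json
$settings.BaseSettings.ApiUrl = "$(ApiUrl)"   # value supplied by the release environment

$settings | ConvertTo-Json -Depth 10 | Set-Content $settingsPath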