itutor Agile Evangelist

  1. What is Scrum? Scrum is an iterative, incremental agile software development methodology for managing product development.
  2. Why do we need Scrum instead of traditional waterfall? Scrum has many advantages. To name a few:-
     a. If the client is still in the ‘vision’ phase and their needs and goals are not clearly defined, this process works best for the client-team connection. We start with Scrum, the client’s requirements gradually clarify as the project progresses, and development can easily adapt to meet these new, evolving requirements.
     b. Intense team collaboration is required in every iteration, so delivery is a team effort.
  3. What is empirical process control in agile? Scrum uses the real-world progress of a project, not a best guess or uninformed estimates, to plan and schedule releases. There are three aspects of empirical process control:-
     a. Visibility – any aspect of the project that affects the outcome needs to be visible to every stakeholder of the project, i.e. the team.
     b. Inspection – the visible aspects are frequently evaluated so that variances are apparent to everyone.
     c. Adaptation – the process is adjusted if one or more aspects fall outside the acceptable range, i.e. are hurting the outcome.
  4. How long does a sprint last? The project is divided into two-week sprints. At the end of each sprint, all stakeholders meet to assess progress and plan the next steps. Project estimation is based on completed work per sprint, not on prediction.

Daily Sprint Meeting:-

The team meets daily for a short stand-up to sync on progress and blockers.

Summary / Training – Agile / Jira

1. Quick intro video from Atlassian to get you excited
2. What is agile, and how Jira/Confluence implement agile – a walkthrough from an agile evangelist’s perspective
3. Jira architecture
4. Dive deep
5. Data Center implementation – replication when there are many instances of Jira running
6. Plugins for deployment, documentation, etc.

Security Compliance as Code – Streamlining the audit and compliance rule-based process


Seldom do we venture into the area of compliance as code. It’s about time we gave it a thought:-


a) Does the infrastructure that implements compliance actually comply with it?
b) Dealing with Excel spreadsheets – filling them in and keeping them updated
c) Cumbersome software used for filling out forms
d) Verifying and validating hundreds of assessment questions, often ambiguous in nature, without an automated check on each
e) Emails back and forth, as against instant data on dashboards/reports
f) Time spent waiting on email responses
g) Compliance implemented only at the start or end of the release cycle
h) No method for automated validation of the rules – run the unit, resource, and integration tests at the system level

Proposed Solution:-

Why don’t we step back and think for a moment?

Let’s build a smart system that can do all of the above tasks – a system you can trust to have the right answers:-

DevOps – compliance as code + security at various layers + system-level checks

We don’t really need to ask someone what debug level they run, for firewall details, etc. – a couple of reports from the system is all we need.


a) Changes to compliance (the rules of the process) go through SCM such as Git/SVN, with auditing enabled.
b) Workflow – adding or modifying a rule feeds straight into the coding/implementation phase at the various module levels (Apache2, MySQL, Oracle, Siebel, SAP, etc.) in the system.
Chef Compliance:-

It enables you to report on the state of your servers, checking them for compliance and for their level of security.
So if we build this product, package it correctly, provide it to clients to install, and maybe build a dashboard/admin panel around it for the different internal/external clients that we serve – voila, do we have an alternate revenue stream here 🙂 !!

This is where InSpec comes in handy – you write custom compliance requirements as code that you can change on the fly.
Does the infrastructure really comply? We agreed on some rules; let’s see how well it implements them.

Implementation steps:-

a) Code the rules

E.g. Ruby-based rules (Ruby version 2.0 or greater, preferably):-

describe package('telnetd') do
  it { should_not be_installed }
end

describe inetd_conf do
  its('telnet') { should eq nil }
end

b) Test the rules once deployed on the system:-

Test Kitchen – configure kitchen.yml
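A kitchen.yml for this step might look like the sketch below; the Docker driver, platform name, and test path are illustrative assumptions, not our actual configuration:-

```yaml
# Hypothetical kitchen.yml sketch for running InSpec tests via Test Kitchen.
driver:
  name: docker          # requires the kitchen-docker gem

provisioner:
  name: chef_zero

verifier:
  name: inspec          # requires the kitchen-inspec gem

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    verifier:
      inspec_tests:
        - test/integration/default
```

Running `kitchen verify` then converges the instance and executes the InSpec rules against it.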
c) Deployment:-

Use the Docker image and save it:-

docker pull chef/inspec
alias inspec='docker run -it --rm -v $(pwd):/share chef/inspec'
d) Run Inspec on remote hosts:-

$ inspec --help
inspec archive PATH # archive a profile to tar.gz (default) …
inspec check PATH # verify all tests at the specified PATH
inspec compliance SUBCOMMAND … # Chef Compliance commands
inspec detect # detect the target OS
inspec exec PATH(S) # run all test files at the specified PATH.
inspec help [COMMAND] # Describe available commands or one spe…
inspec init TEMPLATE … # Scaffolds a new project
inspec json PATH # read all tests in PATH and generate a …
inspec shell # open an interactive debugging shell
inspec supermarket SUBCOMMAND … # Supermarket commands
inspec version # prints the version of this tool

$ inspec exec test.rb -t ssh://user@hostname

# run test on remote windows host on WinRM
inspec exec test.rb -t winrm://Administrator@windowshost --password 'your-password'

# run test on docker container
inspec exec test.rb -t docker://container_id

REF for multi-profile inheritance and reusable tests:-


DevOps – Python/Shell Automated Scripts

Python 3:-
Data cleaning, preparation for analysis, automation, and web-app development (Python with Flask).
Useful Python libraries: attrs, nltk, sh (calling external programs), behold (for debugging) on Python 3.

psutil is a cross-platform library for retrieving information on running processes and system utilization (CPU, memory, disks, network) in Python.
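A minimal psutil sketch of the kind of report described above, assuming psutil is installed (pip install psutil); the function names below are our own, not part of the library:-

```python
# Sketch: report system utilization and the largest processes by memory.
import psutil

def system_report():
    """Return basic utilization figures as percentages."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.5),   # sampled over 0.5 s
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def top_processes(n=5):
    """Return (pid, name, rss_bytes) for the n largest processes by memory."""
    procs = []
    for p in psutil.process_iter(["pid", "name", "memory_info"]):
        mem = p.info.get("memory_info")
        if mem is None:          # process exited or access was denied
            continue
        procs.append((p.info["pid"], p.info["name"], mem.rss))
    return sorted(procs, key=lambda t: t[2], reverse=True)[:n]

if __name__ == "__main__":
    print(system_report())
    for pid, name, rss in top_processes():
        print(f"{pid:>7} {name:<25} {rss / 1e6:8.1f} MB")
```

The same calls can be dropped into a cron-driven script that writes to a dashboard instead of stdout.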


We use the Bokeh library for slicing and dicing data from disparate sources.

Ref –



DevOps – Jenkins – Continuous Integration

Jenkins:- A continuous-integration server written in Java – it builds and tests the code continuously on a non-developer server (CI) after each Git check-in.
– initiates/triggers builds
– keeps track of Git changes
– deploys into the CI environment
– notifies/alerts developers of build success
– automatic build kickoff notification on each Git commit
– Distributed builds: Jenkins can distribute build/test load to multiple computers simply by installing slaves on those servers
– Hudson can pick up source code automatically once the Maven plugin is installed and the POM is checked in.
– Types of jobs: freestyle, Maven, monitor jobs (cron), multi-configuration jobs (run the same build on different configs)
– Manage Jenkins –> Manage Plugins –> Available – integrated with plugins: Git, Checkstyle, PMD, FindBugs, code complexity, Sonar.
– Execute shell scripts during configuration
Run with a Selenium file and TestNG (for annotations @Test, @BeforeClass, etc.)
1. Developers unit test the changes locally and check the files into Git.
Scheduled builds (daily or hourly) then:
i) Check out the code.
ii) Compile the code.
iii) Run the unit test cases.
iv) FTP code to the different hosts.
v) Deploy the artifacts.
2. Developers make sure that config changes and the POM artifact are checked into Git.
3. On Git commit, the job is kicked off by Jenkins and the integration tests are run.
4. Builds are scheduled in Jenkins to run at a specific time (cron jobs).
5. If the build breaks, look through the stack trace to see whether any file changes were missed or config changes not made, replicate the problem in a local workspace, fix it, commit to Git, and force a rebuild in Jenkins.
At iTutor, different developers work simultaneously on modules for, say, release versions 2.0 and 1.5 of itutorlms in two Git branches. These need to be checked into Git with the Maven repository, collected by Jenkins, and sent over to the testing environment, where Selenium integration tests are run against the build.
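The commit-triggered flow above could be sketched as a declarative Jenkins pipeline; the polling schedule, Maven goals, deploy script, and mail address below are illustrative assumptions, not our actual job configuration:-

```groovy
// Hypothetical Jenkinsfile sketch for the checkout -> build -> test -> deploy flow.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // poll Git roughly every 5 minutes
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build & Unit Test') {
            steps { sh 'mvn clean verify' }
        }
        stage('Integration Tests') {
            steps { sh 'mvn failsafe:integration-test failsafe:verify' }
        }
        stage('Deploy to CI') {
            steps { sh './deploy.sh ci' }   // placeholder deploy script
        }
    }
    post {
        failure {
            mail to: 'dev-team@example.com',
                 subject: "Build ${env.BUILD_NUMBER} failed",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```

A webhook trigger could replace polling if the Git server supports it.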
java -jar jenkins.war

Restart Jenkins:-
(Jenkins_url)/safeRestart – allows all running builds to complete before restarting

JENKINS_HOME – all settings, build artifacts, and logs live here – run a cron job to copy this directory.
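That JENKINS_HOME backup cron job could look like the line below (the backup path, home directory, and schedule are assumptions):-

```cron
# Hypothetical crontab entry: nightly 2 AM tarball of JENKINS_HOME
0 2 * * * tar czf /backups/jenkins-$(date +\%F).tar.gz /var/lib/jenkins
```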
Configure a job – Create a new project –> select free-style job –> click OK to create the new job.
Jenkins is integrated with Maven and Git.
How to create a bat file?
– Create a folder named LIB in your project directory and place all your jar files there.
– Open Notepad, insert the line below, and save the file as run.bat in your project directory.
java -cp bin;LIB/* org.testng.TestNG testng.xml
Open a command prompt and set the classpath:
set classpath=C:\Users\Selenium\MCS\bin;C:\Users\Selenium\MCS\LIB\*

DevOps – Puppet

Puppet is a configuration management tool used to automate administrative tasks.
The puppet agent (client) sends a request to the puppet master (server), and the puppet master pushes configuration down to the agent.
Manifest:- You specify client configs in manifests, which are then pushed to the specific nodes.
Manifest ordering:- By default, Puppet applies resources in the order they appear in the manifest, but explicit relationships (require, before, notify, subscribe) override that order.
Module:- Manifests are grouped into modules, which makes managing manifests easier.
Certificate location:- /var/lib/puppet/ssl/ca/signed
Configuration data:- /etc/puppet or /var/lib/puppet
Facter – helps in writing manifests based on agent-specific data such as IP address, CPU info, $operatingsystem, etc., obtained by running # facter in a shell.
Puppet agent – the etckeeper-commit-post / -pre configuration options define the scripts to execute before and after pushing configurations to an agent.
Puppet Kick – a utility which allows you to trigger the puppet agent from the puppet master.
MCollective – an orchestration framework used to run actions on thousands of servers simultaneously using plugins.
Puppet’s model-driven design architecture – Puppet models everything: the current state of the node (puppet resource), the desired configuration state (puppet apply), and the actions taken during each configuration change. Each node’s agent receives a catalog of resources, compares it to the current state, and makes changes to bring the node to the desired state.
A DSL configuration language is used for manifests.
Class names, module names, resource types and parameters, and variables are lower case.
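The manifest ordering noted above can be made explicit with relationship metaparameters; a minimal sketch (the ntp package/service pair is an illustrative assumption, not one of our modules):-

```puppet
# Hypothetical example: 'require' guarantees the package is managed
# before the service, regardless of where each appears in the manifest.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],   # explicit dependency overrides manifest order
}
```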
How we use Puppet at iTutor:-
We have automated the configuration and deployment of Linux (Ubuntu 14.04) LAMP stacks as defined in manifests, customizing modules obtained from the Forge such as puppetlabs/mysql, puppetlabs/aws, etc., reducing processing time from an hour to 5 minutes. We have documented the purpose of each module in Git. Modules are still being customized further to make additional gains.
We worked with the development team to automate configuration of files such as database.php, php.ini, etc., and include them in the manifests for all environments.
Tools workflow:-
A change request comes in via Jira. Git is used to manage the Puppet code – check out, commit, and push to the master repository. It gets picked up by Jenkins (the CI tool) and deployed to the CI environment, where integration tests are kicked off for the build before it is deployed to staging and production.
Version of Puppet we use:-
Version – Puppet Enterprise 2016.4 / 4.0/3.x,y,z
Practically – installation, commands, and Puppet Forge modules providing manifests for AWS, LAMP, and MySQL
Puppet commands:-
puppet ca list  — lists certificates
puppet ca sign  — signs certificates
puppet parser validate selinux.pp  — checks the syntax of a manifest
puppet-lint init.pp  — for everyone to follow standards/conventions
puppet module install puppetlabs-aws --version 1.4.0  — installs modules via the Forge
When we upgraded to Ubuntu Xenial 16.04, we had issues with Puppet.
Manage users and manage code.
Writing automated tests – testing the system, modules, and catalogs.
Installation of Puppet:- puppet master, PuppetDB, and the PE console on one node; the agent pulls manifests from the puppet master and applies them.
Resources are used for modelling configurations.
guest $: puppet resource user root
user { 'root':
  ensure   => 'present',
  comment  => 'root',
  home     => '/root',
  password => '$1$v4K9E8Wj$gZIHJ5JtQL5ZGZXeqSSsd0',
}
Resources are declared:- define the ‘state’ of the guest user – the Puppet DSL below is the configuration language.
guest $: cat guest.pp
user { 'guest':
  ensure   => 'present',
  comment  => 'Managed by Puppet',
  password => '$1$v4K9E8Wj$gZIHJ5JtQL5ZGZXeqSSsd0',
}
guest $: puppet resource user guest
user { 'guest':
  ensure => 'absent',
}
Run it in simulation mode, since guest is absent above:- --noop compares the current state to the desired future state of the system.
guest $: puppet apply guest.pp --noop
Notice: Compiled catalog for master in environment production in 0.13 seconds
Notice: /Stage[main]/Main/User[guest]/ensure: current_value absent, should be present (noop)
Notice: Finished catalog run in 0.16 seconds
guest $: puppet apply guest.pp
Notice: Compiled catalog for master in environment production in 0.14 seconds
Notice: /Stage[main]/Main/User[guest]/ensure: created
Notice: Finished catalog run in 0.27 seconds
guest $: puppet resource user guest
user { 'guest':
  ensure   => 'present',
  comment  => 'Managed by Puppet',
  home     => '/home/guest',
  password => '$1$v4K9E8Wj$gZIHJ5JtQL5ZGZXeqSSsd0',
}
So cron jobs can be set up and the configurations applied to the correct systems.
For modules:- puppet code/manifests
Puppet MySQL configuration:-
Puppet manifests need to include the following.
To customize options, such as the root password or /etc/my.cnf settings, you must also pass in an override hash:
class { '::mysql::server':
  root_password           => 'strongpassword',
  remove_default_accounts => true,
  override_options        => $override_options,
}
Create a new database:-
mysql::db { 'mydb':
  user     => 'myuser',
  password => 'mypass',
  host     => 'localhost',
  grant    => ['SELECT', 'UPDATE'],
}
Import data (the default timeout is 300 seconds):-
mysql::db { 'mydb':
  user           => 'myuser',
  password       => 'mypass',
  host           => 'localhost',
  grant          => ['SELECT', 'UPDATE'],
  sql            => '/path/to/sqlfile.gz',
  import_cat_cmd => 'zcat',
  import_timeout => 900,
}
Managing AWS manifests:-
puppet module install puppetlabs-aws --version 1.4.0

Managing LAMP stack manifests:-

puppet module install darkmantle-lamp --version 1.1.0

Scaling Puppet / Tuning Puppet:-
By default, the puppet agent checks in every 30 minutes to see whether new configuration is available. Do not run puppetd as a daemon.
We have added a cron job for the puppet agent with --onetime, so we can set different check-in intervals on different nodes.
Also, /ext/puppetrun can be configured to send configurations to selected clients. Instead of using the daemon, we use MCollective to launch the puppet agent with the --onetime option.

We use Git to rsync data and manifests between several nodes, and then run puppet apply locally via cron to apply the changes to the nodes.
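A crontab entry for the --onetime approach might look like this (the interval and install path are assumptions):-

```cron
# Hypothetical crontab entry: run the agent once every 45 minutes;
# --splay staggers check-ins so nodes don't all hit the master at once
*/45 * * * * /opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --splay
```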

