INDUSTRY PRACTICES AND TOOLS
VERSION CONTROL
As software systems grow larger and more complex, it becomes difficult to manage a whole project, so it is important to keep track of the source code at its different stages. Version control systems are a category of software tools that help a software team manage changes to source code over time. A VCS keeps every modification in a special kind of database, so if a mistake is made, the developer can turn back the clock and restore an earlier version. There are three types of version control systems.
01. Local version control systems.
02. Centralized version control systems.
03. Distributed version control systems.
A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large websites, and other collections of information.
The need for a logical way to organize and
control revisions has existed for almost as long as writing has existed,
but revision control became much more important, and more complicated, when the era
of computing began. The numbering of book editions and of specification revisions are examples that date back to
the print-only era. Today, the most capable (as well as complex) revision
control systems are those used in software development, where a team of
people may concurrently make changes to the same files.
Version control
systems (VCS) most commonly run as stand-alone
applications, but revision control is also embedded in various types of
software such as word processors and spreadsheets.
LOCAL VERSION CONTROL SYSTEMS
This is the oldest type of VCS, in which everything is kept on one computer. The big limitation is that it cannot be used for collaborative development.
A local VCS works by keeping patch sets (that is, the differences between the file's content at progressive stages) in a special format in a version database stored on your local hard disk.
It can then recreate the file's contents exactly as they were at any given point in time by adding up all the relevant patches in order and "checking it out" (reproducing the content to the user's workspace).
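As a rough illustration of that idea (a minimal sketch, not the storage format of any particular tool), the following Python snippet keeps a full base copy of a file plus one patch per stage, and rebuilds the latest content by applying the patches in order; the helper names are invented for this example.

import difflib

def make_patch(old, new):
    # Record only the operations needed to turn `old` into `new`.
    patch = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
        if tag == "equal":
            patch.append(("keep", i2 - i1))
        else:  # "replace", "delete" or "insert"
            patch.append(("swap", i2 - i1, new[j1:j2]))
    return patch

def apply_patch(old, patch):
    # Replay one patch against the previous stage of the file.
    out, pos = [], 0
    for op in patch:
        if op[0] == "keep":
            out.extend(old[pos:pos + op[1]])
            pos += op[1]
        else:
            _, skip, lines = op
            out.extend(lines)
            pos += skip
    return out

# Three progressive stages of a file; only the first is stored in full.
stages = [["a\n", "b\n"], ["a\n", "b\n", "c\n"], ["a (edited)\n", "b\n", "c\n"]]
base = stages[0]
patches = [make_patch(stages[i], stages[i + 1]) for i in range(len(stages) - 1)]

# "Checking out" the latest version: apply every patch, in order, to the base copy.
content = base
for p in patches:
    content = apply_patch(content, p)
print(content)   # ['a (edited)\n', 'b\n', 'c\n']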
CENTRALIZED VERSION CONTROL SYSTEMS
A local version control system cannot be used for collaborative software development, because the files and their versions are stored on somebody's local computer and are not accessible to other people working on the same files.
A centralized version control system solves this: the versioned files are held on a single server and a number of clients check them out from that central place, so it can be used for collaborative work and everyone knows, to a degree, what everyone else on the project is doing. Its most obvious drawback, however, is the single point of failure that the centralized server represents.
DISTRIBUTED VERSION CONTROL SYSTEMS
This is the most capable type of VCS, because there is no single point of failure: every client is effectively also a server. Clients don't just check out the latest snapshot of the files; they fully mirror the repository, including its complete history. If any server is destroyed, any of the client repositories can be copied back to restore it. Such systems also let you collaborate with different groups of people in different ways within the same project.
Whenever you bet everything on one single unit, the probability of losing is also high. There is a high degree of risk in using a centralized version control system, because users only have the latest version of the files on their machines for working purposes; you might ultimately lose the entire history of your files if the server becomes corrupted and you don't have fail-safe procedures in place.
The trade-off, then, is this: you risk everything when you store your entire history in one single place with centralized version control, while you lose the ability to work collaboratively when you use local version control. Distributed version control avoids both problems.
GIT
Git is the most commonly used version control system today because of features such as its speed and data integrity. It is a distributed version control system.
Git is free and open source, and is designed to handle everything from small to very large projects with speed and efficiency. It is easy to learn and has a tiny footprint with lightning-fast performance.
Popular Git hosting services include GitHub, GitLab, Bitbucket and CloudForge.
GIT TERMINOLOGY
BRANCH
- A branch represents an independent line of development. Branches serve as an abstraction for the edit/stage/commit process: you can think of them as a way to request a brand-new working directory, staging area, and project history. New commits are recorded in the history of the current branch, which results in a fork in the history of the project.
HEAD - Git’s way of
referring to the current snapshot. Internally, the git checkout command simply
updates the HEAD to point to either the specified branch or commit. When it
points to a branch, Git doesn't complain, but when you check out a commit, it
switches into a “detached HEAD” state.
HOOK - A script that runs
automatically every time a particular event occurs in a Git repository. Hooks
let you customize Git’s internal behavior and trigger customizable actions at
key points in the development life cycle.
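For example, a pre-commit hook is just an executable file saved as .git/hooks/pre-commit; Git runs it before each commit and aborts the commit if it exits with a non-zero status. The sketch below enforces a made-up policy (no TODO markers in staged files) purely for illustration.

#!/usr/bin/env python3
# Hypothetical pre-commit hook: save as .git/hooks/pre-commit and mark it executable.
# Git runs it before each commit; exiting with a non-zero status aborts the commit.
import subprocess
import sys

# List the files staged for this commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

for path in staged:
    try:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if "TODO" in f.read():
                print(f"Commit blocked: {path} still contains a TODO marker.")
                sys.exit(1)
    except FileNotFoundError:
        # The file was staged for deletion, so there is nothing to scan.
        continue

sys.exit(0)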
MASTER - The default
development branch. Whenever you create a git repository, a branch named
"master" is created, and becomes the active branch.
PULL REQUEST - Pull requests are a feature that makes it easier for developers to collaborate using hosting services such as Bitbucket or GitHub. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project.
REPOSITORY - A collection of commits, together with the branches and tags that identify those commits.
WORKING TREE -
The tree of actual checked out files, normally containing the contents of the
HEAD commit's tree and any local changes you've made but haven't yet committed.
CLONE – An exact copy of an existing repository, including its full history.
TAG - A reference
typically used to mark a particular point in the commit chain. In contrast to a
head, a tag is not updated by the commit command.
GIT WORKFLOW - The Gitflow workflow streamlines the release cycle by using isolated branches for feature development, release preparation, and maintenance. Its strict branching model also lends some much-needed structure to larger projects.
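As a rough sketch of that branching model, the commands below walk through one Gitflow cycle. They are wrapped in Python's subprocess only so the example stays in one language, and they assume git is installed and the current directory is a fresh repository with at least one commit; the branch names follow the usual Gitflow convention.

import subprocess

def git(*args):
    # Thin wrapper that fails loudly if any git command errors out.
    subprocess.run(["git", *args], check=True)

git("checkout", "-b", "develop")                    # long-lived integration branch
git("checkout", "-b", "feature/login", "develop")   # isolated feature work
# ... edit files and commit on feature/login ...
git("checkout", "develop")
git("merge", "--no-ff", "feature/login")            # fold the finished feature back in
git("branch", "-d", "feature/login")
git("checkout", "-b", "release/1.0", "develop")     # prepare the release in isolation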
CONTENT DELIVERY/DISTRIBUTION NETWORK [CDN]
A content delivery/distribution network (CDN) is a system of distributed servers that delivers web pages and other web content to a user based on the geographic location of the user, the origin of the web page, and the content delivery server.
This service is effective at speeding up the delivery of content for websites with high traffic and for websites that have global reach. The closer the CDN server is to the user geographically, the faster the content is delivered. CDNs also provide protection from large surges in traffic.
HOW CDNs WORK
Servers nearest to the website visitor
respond to the request. The content delivery network copies the pages of a
website to a network of servers that are dispersed at geographically different
locations, caching the contents of the page. When a user requests a web page that is part of a content delivery network, the CDN will redirect the request
from the originating site's server to a server in the CDN that is closest to
the user and deliver the cached content. CDNs will also communicate with the
originating server to deliver any content that has not been previously cached.
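The toy Python sketch below mirrors that flow: pick the edge server closest to the user, answer from its cache when possible, and otherwise fetch from the origin and cache the result. The server names, positions and content are all invented for the example.

# Invented edge servers, each with a rough position and an initially empty cache.
EDGE_SERVERS = {
    "eu-west":  {"position": 10,  "cache": {}},
    "us-east":  {"position": 60,  "cache": {}},
    "ap-south": {"position": 120, "cache": {}},
}
ORIGIN = {"/index.html": "<html>home page</html>"}   # the originating site's server

def fetch_via_cdn(url, user_position):
    # 1. Redirect the request to the geographically closest edge server.
    edge = min(EDGE_SERVERS.values(), key=lambda s: abs(s["position"] - user_position))
    # 2. Serve the cached copy if this edge server has already seen the URL.
    if url in edge["cache"]:
        return edge["cache"][url]
    # 3. Otherwise ask the originating server, cache the response, then deliver it.
    content = ORIGIN[url]
    edge["cache"][url] = content
    return content

print(fetch_via_cdn("/index.html", user_position=15))   # first request reaches the origin
print(fetch_via_cdn("/index.html", user_position=15))   # repeat request is served from cache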
The process of bouncing through CDNs is
nearly transparent to the user. The only way a user would know if a CDN has
been accessed is if the delivered URL is different than the URL that has been
requested.
BENEFITS OF USING A CDN
· significantly reduced page load time for your website
· increased revenue by 1% for every 100 ms of improvement to your page load time
· retaining more customers (they are more satisfied)
· more manageable traffic
· maximum availability of your product
· a more secure network
· no geographical barriers
· easy delivery of video and audio-rich content
· building a more interactive website without losing visitors to latency
· reaching mobile customers with ease
· branching out to new markets and regions
· easy management of traffic peaks
· more scalability for your business; you can grow it as much as you want to
· less or no downtime
· setting your own criteria to enable the best possible performance for your website
VIRTUALIZATION
There are often gaps between the development environment and the deployment environment: different platforms, missing dependencies or frameworks, and wrong configurations. These issues can be overcome by developing the software/project in a virtualized environment.
The main goal of virtualization
is to manage workloads by radically transforming traditional computing to make
it more scalable. Virtualization has been a part of the IT landscape for
decades now, and today it can be applied to a wide range of system layers,
including operating system-level virtualization, hardware-level virtualization
and server virtualization.
There
are five types of virtualization.
01. Application virtualization
02. Desktop virtualization
03. Hardware virtualization
04. Network virtualization
05. Storage virtualization
Application
virtualization - This is a process where applications are virtualized
and delivered from a server to end users' devices, such as laptops, smartphones, and tablets. So instead of logging into their computers at work,
users will be able to gain access to the application right from their device,
provided an Internet connection is available. This is particularly popular for
businesses that require the use of their applications on the go.
Desktop
virtualization - Similar to application virtualization mentioned above, desktop virtualization separates the desktop environment from the physical device and configures it as a “virtual desktop infrastructure” (VDI). One of the
biggest advantages of desktop virtualization is that users are able to access
all their personal files and applications on any PC, meaning they can work from
anywhere without the need to bring their work computer. It also lowers the cost
of software licensing and updates. Maintenance and patch management are simple,
since all of the virtual desktops are hosted at the same location.
Hardware
virtualization - Perhaps the most common type of virtualization today,
hardware virtualization is made possible by a virtual machine monitor (VMM) called the “hypervisor”. The hypervisor creates virtual versions of computers and
operating systems and consolidates them into one large physical server, so that
all the hardware resources can be utilized more efficiently. It also enables
users to run different operating systems on the same machine simultaneously.
Network
virtualization - Network virtualization combines all physical networking
equipment into a single, software-based resource. It also divides available
bandwidth into multiple, independent channels, each of which can be assigned to
servers and devices in real time. Businesses that would benefit from network
virtualization are ones that have a large number of users and need to keep
their systems up and running at all times. With the distributed channels, your
network speed will increase dramatically, allowing you to deliver services and
applications faster than ever before.
Storage
virtualization - This type of virtualization is very easy and
cost-effective to implement, since it involves compiling your physical hard
drives into a single cluster. Storage virtualization is handy when it comes to
planning for disaster recovery, since the data stored on your virtual storage
can be replicated and transferred to another location. By consolidating your
storage into a centralized system, you can eliminate the hassles and costs of
managing multiple storage devices.
System Requirements for Virtualization
Review the following requirements
for the Virtual Server Protection (VSP) package.
General Package Requirements
The computer on which you plan to
install the package must satisfy the following system requirements:
Operating Systems:
· Microsoft Windows Server 2016 x64 Editions
· Microsoft Windows Server 2012 R2 x64 Editions
· Microsoft Windows Server 2012 x64 Editions
· Microsoft Windows Server 2008 x64 Editions

Hard Drive: 500 GB recommended. Depending upon the number of virtual machines that you are planning to backup, ensure that the backup server computer has sufficient free space to store all virtual machine data.

Memory: 16 GB RAM minimum required

Processor: All Windows-compatible processors supported

IIS: IIS must be enabled on the backup server.
Supported Web Browsers for the Command Center
You can run the Command Center on
the following browsers:
· Apple Safari version 8.0 and later
· Google Chrome v40 and later
· Microsoft Edge
· Microsoft Internet Explorer (IE) v10 or later
· Mozilla Firefox v47.0 or later
Database Engine
Microsoft SQL Server 2014 Express
edition is automatically installed during the installation of the package.
The maximum database size for
Microsoft SQL Server 2014 Express edition is 10 gigabytes. If your Express
edition database approaches the 10 gigabyte limit, a critical event will appear
in the dashboard. To increase the size of the database, you can upgrade to
Microsoft SQL Server 2014 Standard edition.
PROS AND CONS OF VIRTUALIZATION
PROS –
· Capital expenditure savings. Virtualization lets companies reduce their IT costs by requiring fewer hardware servers and related resources to achieve the same level of computing performance, availability and scalability.
· Operational expenditure savings. Once servers are virtualized, your IT staff can greatly reduce the ongoing administration and management of manual, time-consuming processes by automating operations, thus resulting in lower operational expenses.
· Data centre and energy-efficiency savings. As companies reduce the size of their hardware and server footprint, they lower their energy consumption, cooling power and data centre square footage, thus resulting in lower costs.
CONS –
· The upfront costs are heavy
· Efficient resource utilization
· Not all hardware or software can be virtualized
HYPERVISOR
A hypervisor or
virtual machine monitor (VMM) is computer software, firmware or hardware that creates
and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host
machine, and each virtual machine is called a guest machine.
Type-1, native or bare-metal hypervisors
These hypervisors run directly on
the host's hardware to control the hardware and to manage guest operating
systems. For this reason, they are sometimes called bare-metal hypervisors.
The first hypervisors, which IBM developed in the 1960s, were native
hypervisors.
Type-2 or hosted hypervisors
These hypervisors run on a
conventional operating system (OS) just as other computer programs do. A guest
operating system runs as a process on the host. Type-2 hypervisors
abstract guest operating systems from the host operating system.
DIFFERENCE BETWEEN A VM AND AN EMULATOR
Virtual machines make use of CPU self-virtualization, to whatever extent it exists, to provide a virtualized interface to the real hardware: guest code runs directly on the CPU, with some operations redirected to a hypervisor controlling the virtual container. Emulators, by contrast, emulate the hardware entirely in software, without relying on the CPU being able to run the guest code directly.
VMs VS CONTAINERS
VMs - A
virtual machine (VM) is an emulation of a computer system. Put simply, it makes
it possible to run what appear to be many separate computers on hardware that
is actually one computer.
The
operating systems (“OS”) and their applications share hardware resources from a
single host server, or from a pool of host servers. Each VM requires its own
underlying OS, and the hardware is virtualized. A hypervisor, or a virtual
machine monitor, is software, firmware, or hardware that creates and runs VMs.
It sits between the hardware and the virtual machine and is necessary to
virtualize the server.
Since the
advent of affordable virtualization technology and cloud computing services, IT
departments large and small have embraced virtual machines (VMs) as a way to
lower costs and increase efficiencies.
BENEFITS OF VMs
- All OS resources available to apps
- Established management tools
- Established security tools
- Better known security controls
CONTAINERS - With containers, instead of virtualizing the
underlying computer like a virtual machine (VM), just the OS is virtualized.
Containers
sit on top of a physical server and its host OS — typically Linux or Windows.
Each container shares the host OS kernel and, usually, the binaries and
libraries, too. Shared components are read-only. Sharing OS resources such as
libraries significantly reduces the need to reproduce the operating system
code, and means that a server can run multiple workloads with a single
operating system installation. Containers are thus exceptionally light — they
are only megabytes in size and take just seconds to start. Compared to
containers, VMs take minutes to run and are an order of magnitude larger than
an equivalent container.
In
contrast to VMs, all that a container requires is enough of an operating
system, supporting programs and libraries, and system resources to run a
specific program. What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with a
VM. In addition, with containers you can create a portable, consistent
operating environment for development, testing, and deployment.
BENEFITS OF CONTAINERS
- Reduced IT management resources
- Reduced size of snapshots
- Quicker spinning up apps
- Reduced & simplified security updates
- Less code to transfer, migrate, upload workloads
