Docker is an OS-level facility/technology (primarily Linux at the moment) that can build OS images dedicated all the way to the application level in multiple layers. Developers build Docker application images instead of application packages, whereby the application images are standalone and can run on any hardware running a compatible OS kernel. The images are layered and can be shared publicly or privately. Docker provides the verbs and scripts necessary to build, deploy, and monitor application images running in containers on a native OS. Docker containers are significantly more lightweight than virtual machines, making distributed, scalable, multi-tenant computing engines in the cloud effective and efficient.

Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.

See some basic research on docker and its landscape here first

A set of key videos for docker

Basics of Docker

Search for: Basics of Docker

docker homepage

what kind of tooling is available around docker?

Search for: what kind of tooling is available around docker?

Very high-level basics of Docker are here

Ability to package end user applications with all their dependencies all the way to the OS level.

The dependencies are layered starting with the OS at the base level.

These are called layered images.
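The layering idea can be sketched as a copy-on-write lookup: each layer stores only the files it adds or changes, and a read falls through to the layer below. This is a toy illustration of the concept only, not Docker's actual storage driver; the class and file names are invented for the example.

```python
# Toy sketch of layered images: each layer stores only its own changes,
# and reads fall through to lower layers (the copy-on-write idea).

class Layer:
    def __init__(self, files, parent=None):
        self.files = files      # files added/changed in this layer
        self.parent = parent    # the layer below, or None for the base

    def read(self, path):
        """Look up a file, falling through to lower layers."""
        if path in self.files:
            return self.files[path]
        if self.parent is not None:
            return self.parent.read(path)
        raise FileNotFoundError(path)

# A base OS layer, then an application layer stacked on top of it.
base = Layer({"/etc/os-release": "Ubuntu"})
app = Layer({"/app/server.py": "print('hi')"}, parent=base)

print(app.read("/app/server.py"))   # found in the top layer
print(app.read("/etc/os-release"))  # falls through to the base layer
```

Because upper layers only hold deltas, many application images can share one base OS layer, which is what keeps the images small and fast to distribute.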

A unix platform then is capable of starting these images as a co-process within 100 milliseconds.

Each application can then live in its own image.

Control groups and namespaces are two OS-level technologies that are used to implement something like Docker.

Docker is implemented in Go and runs as an executable on Intel Linux machines.

Docker provides a vocabulary to build script files to compose images.

These images can then be run on any machine in an infrastructure

It is likely that technologies like Red Hat OpenShift, IBM Bluemix, or Heroku are similar in nature.

There will likely be higher-level tools to monitor these containers and manage them at a high level.

This technology is a few years old, and there are also standardization efforts at the OS level.

Technology like this will allow us to see the infrastructure not as a collection of computers but as a single computer that is distributed behind the scenes.

Docker seems to be simple enough to be envisioned and used by developers, and not only by seasoned sysadmins.

CoreOS and Rocket are technologies in the same space as Docker (Rocket is a container runtime from the CoreOS team).

Docker is like VMs but a much more lightweight technology.

One could imagine a consumer world where a consumer maintains a series of OS images and moves back and forth between copies of their OS to recover (possibly..)

Docker Rocket CoreOS Kubernetes Mesos

Search for: Docker Rocket CoreOS Kubernetes Mesos


Docker
Rocket
CoreOS
Kubernetes
appC

Here is some discussion on kubernetes

Kubernetes

Search for: Kubernetes

Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.
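The declarative idea can be pictured as a reconciliation loop: the user states a desired replica count, and a controller repeatedly compares it with the actual state and acts to close the gap. This is a toy sketch of the concept only, not Kubernetes code; the container names are invented.

```python
# Toy reconciliation loop: a controller drives actual state toward the
# user's declared desired state, which is the spirit of a Kubernetes
# replication controller.

def reconcile(desired_replicas, running):
    """Start or stop containers until `running` matches the desired count."""
    while len(running) < desired_replicas:
        running.append("redis-master-%d" % len(running))  # "start" a container
    while len(running) > desired_replicas:
        running.pop()                                     # "stop" a container
    return running

running = []
reconcile(1, running)   # the user declared 1 replica; one container is started
print(running)

running.clear()         # simulate the container dying
reconcile(1, running)   # the controller heals it on its next pass
print(running)
```

The key point is that the user never issues "start container" commands; the controller keeps re-running this loop, so crashes are healed automatically rather than by imperative scripts.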


Google Compute Engine
Vagrant
Fedora (Ansible)
Fedora (Manual)
Microsoft Azure
Rackspace
CoreOS
vSphere
Amazon Web Services
Mesos

etcd

Search for: etcd

One of the fundamental components that Kubernetes needs to function is a globally available configuration store. The etcd project, developed by the CoreOS team, is a lightweight, distributed key-value store that can be distributed across multiple nodes.
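The role etcd plays can be pictured as a shared key-value store that components write configuration into and watch for changes. The sketch below is a single-process toy of that model only; it ignores the distributed consensus and replication that real etcd provides, and the key names are invented.

```python
# Toy key-value store with watchers, sketching etcd's role: one component
# writes configuration under a key, and others are notified of the change.

class Store:
    def __init__(self):
        self.data = {}
        self.watchers = {}   # key -> list of callbacks

    def set(self, key, value):
        self.data[key] = value
        for cb in self.watchers.get(key, []):
            cb(key, value)   # notify anyone watching this key

    def get(self, key):
        return self.data[key]

    def watch(self, key, callback):
        self.watchers.setdefault(key, []).append(callback)

store = Store()
events = []
store.watch("/services/redis", lambda k, v: events.append((k, v)))
store.set("/services/redis", "10.0.0.5:6379")
print(store.get("/services/redis"))
print(events)
```

The watch mechanism is why a globally available store matters: schedulers and nodes coordinate by observing the same keys rather than talking to each other directly.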

What is Flannel?

Search for: What is Flannel?

Demystifying Kubernetes: the tool to manage Google-scale workloads in the cloud

What is Mesos?

Search for: What is Mesos?

Kubernetes and Azure

Search for: Kubernetes and Azure

An Intro to Google's Kubernetes and How to Use It

Kubernetes sample script

Search for: Kubernetes sample script


{
   "kind":"ReplicationController",
   "apiVersion":"v1beta3",
   "metadata":{
      "name":"redis-master",
      "labels":{
         "name":"redis-master"
      }
   },
   "spec":{
      "replicas":1,
      "selector":{
         "name":"redis-master"
      },
      "template":{
         "metadata":{
            "labels":{
               "name":"redis-master"
            }
         },
         "spec":{
            "containers":[
               {
                  "name":"master",
                  "image":"redis",
                  "ports":[
                     {
                        "containerPort":6379,
                        "protocol":"TCP"
                     }
                  ]
               }
            ]
         }
      }
   }
}

what is redis?

Search for: what is redis?

Redis is a data structure server. It is open-source, networked, in-memory, and stores keys with optional durability. The development of Redis has been sponsored by Pivotal Software since May 2013; before that, it was sponsored by VMware.
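The phrase "data structure server" can be made concrete with a toy in-memory store whose keys map to more than plain strings, e.g. lists. This is a sketch of the idea only; real Redis is a networked C server with many more types, and the key names here are invented.

```python
# Toy illustration of Redis as a "data structure server": keys map not
# just to strings but to richer structures such as lists.

class MiniRedis:
    def __init__(self):
        self.data = {}

    # String commands
    def set(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    # List commands, loosely mirroring Redis's RPUSH / LRANGE
    def rpush(self, key, *values):
        self.data.setdefault(key, []).extend(values)
        return len(self.data[key])       # Redis returns the new list length

    def lrange(self, key, start, stop):
        items = self.data.get(key, [])
        # as in Redis, -1 means "through the end", and stop is inclusive
        return items[start:] if stop == -1 else items[start:stop + 1]

r = MiniRedis()
r.set("greeting", "hello")
r.rpush("queue", "job1", "job2")
print(r.get("greeting"))
print(r.lrange("queue", 0, -1))
```

In the Kubernetes sample above, the `redis` image started in the container is the real server version of this idea, listening on port 6379.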

Boomi Redis connector

Search for: Boomi Redis connector

sample docker file

Search for: sample docker file
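As a concrete example of the kind of script file Docker uses to compose images, here is a minimal Dockerfile; the base image, file name, and port are illustrative assumptions, not from a particular project.

```dockerfile
# Each instruction below produces one layer of the image.

# Start from an official base image layer
FROM python:3

# Copy the application into the image (hypothetical file name)
COPY app.py /app/app.py

# Declare the port the application listens on
EXPOSE 8080

# Command run when a container is started from this image
CMD ["python", "/app/app.py"]
```

Such a file is typically built into an image with `docker build` and started with `docker run`.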

Books on Docker

Search for: Books on Docker


Virtual Machines
Application Containers

Docker, libcontainer, libchan, libswarm
Docker Hub
Kubernetes
Mesos
Fleet
 
Control Groups
Name Spaces
appC
CoreOS
Rocket
nspawn
 
Google Compute Engine
Vagrant
Fedora (Ansible)
Fedora (Manual)
Microsoft Azure
Rackspace
vSphere
Amazon Web Services
Etcd
Flannel
systemd
beanstalk

Docker (Basic idea)
Kubernetes (Management)

1. Ability to package end user applications with all their dependencies all the way to the OS level. It is the idea of a contained VM but without the VM overhead.

2. The dependencies are layered starting with the OS at the base level. A single underlying Linux kernel can run the userland of other Linux distributions on top of it as a Docker process (not a VM)

3. Docker does this through layered images.

4. A unix platform then is capable of starting these images as a co-process within 100 milliseconds.

5. Each application can then live in its own image.

6. Control groups and namespaces are two OS-level technologies that are used to implement something like Docker.

7. Docker is implemented in Go and runs as an executable on Intel Linux machines.

8. Docker provides a vocabulary to build script files to compose images.

9. These images can then be run on any machine in an infrastructure by sending commands to the Docker daemon running on that server

10. It is likely that technologies like Red Hat OpenShift, IBM Bluemix, or Heroku are similar in nature or will use Docker underneath

11. There will likely be higher-level tools to monitor these containers and manage them at a high level: for example, Kubernetes from Google

12. This technology is a few years old, and there are also standardization efforts at the OS level (appC, CoreOS, Rocket)

13. Technology like this will allow us to see the infrastructure not as a collection of computers but as a single computer that is distributed behind the scenes.

14. Docker seems to be simple enough to be envisioned and used by developers, and not only by seasoned sysadmins.

15. CoreOS and Rocket are technologies in the same space as Docker (Rocket is a container runtime from the CoreOS team)

16. Docker is like VMs but a much more lightweight technology

17. One could imagine a consumer world where a consumer maintains a series of OS images and moves back and forth between copies of their OS to recover (possibly..)

1. Its design, and the interface it presents to developers and admins, is easy and simple

2. You write script files to control deployments precisely with virtualized hardware

3. You have a well defined vocabulary to control deployments in a virtual world

1. Most cloud-based systems like Heroku, OpenShift, or Bluemix may/will use Docker underneath

2. It is possible deployment tools in an enterprise will shift to this model (it is compelling enough that it will be everywhere in a few years)

1. Your exposure to these containers will be mostly behind the scenes, in the cloud or in your enterprise

2. Your deployment unit will now be a DOCKER image and NOT an executable, a WAR, an EAR, etc.

3. You will need to know the "verbs" used by Docker and Kubernetes to write instructions on how to install your applications

4. Significant portability for your applications between environments and machines

1. It is a language and environment agnostic software container

2. It is NOT a container like salesforce or Boomi or Siebel or PEGA which are TRULY end user application containers in a CONTROLLED environment

3. The need for the latter is not solved by Docker

4. As a result massive DEVELOPMENT benefits will not be realized

5. YES the deployment environment becomes easier

6. It is likely that the "CONTROLLED", "DEDICATED" environments may be constructed out of Docker


Redis is really, really cool! Here is my research log on it