What is Docker? Docker is a software development platform and a form of virtualization technology that makes it easy to develop and deploy apps inside neatly packaged, containerized environments, meaning apps run the same no matter where they are or what machine they are running on. Docker containers can be deployed to just about any machine without compatibility issues, so your software stays system agnostic, making it simpler to use, less work to develop, and easier to maintain and deploy.
These containers running on your computer or server act like little microcomputers with very specific jobs, each with its own filesystem and its own isolated slice of CPU, memory, and network resources. Because of this, they can easily be added, removed, stopped, and started again without affecting each other or the host machine. A container usually runs one specific task, such as a MySQL database or a NodeJS application, and containers are then networked together and potentially scaled. A developer will usually start by going to DockerHub, an online registry of Docker images, and pulling one containing a pre-configured environment for their programming language, such as Ruby or NodeJS, with all of the files and frameworks needed to get started.
Home users can enjoy Docker as well, using containers for popular apps like Plex media server, NextCloud, and many other open-source apps and tools, many of which we will be installing in upcoming episodes. Docker is a form of virtualization, but unlike a virtual machine, a container shares its resources directly with the host. This lets you run many Docker containers on a machine that could only handle a few virtual machines. A virtual machine has to fence off a set amount of resources, disk space, memory, and processing power, emulate hardware, and boot an entire operating system, and then the VM talks to the host computer through a translation layer running on the host operating system called a hypervisor.
Docker talks directly to the host's kernel instead, cutting out that middleman on Linux machines, and on Windows 10 and Windows Server 2016 and above it can do the same for native Windows containers. On a Linux host, this means you can run just about any Linux distribution in a container and it will run natively. Not only that, Docker uses less disk space too, because it reuses files efficiently through a layered filesystem: if multiple Docker images are built on the same base image, Docker keeps only a single copy of those files and shares it with each container.
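To make that concrete, here's a rough sketch of layer sharing; the image tag and packages below are just placeholder assumptions for illustration:

  # Two hypothetical Dockerfiles that both start from the same base image:
  #   image one:  FROM alpine:3.19   then   RUN apk add --no-cache curl
  #   image two:  FROM alpine:3.19   then   RUN apk add --no-cache git
  # After building both, the alpine base layers are stored on disk only once
  # and shared by both images. You can inspect an image's layers with:
  docker history image-one
  docker history image-two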
Alright, so how do we use Docker? First, install Docker on your machine; we'll provide links in the description. You begin with a Dockerfile, which can be built into a Docker image, which can be run as a Docker container. Okay, let's break that down. The Dockerfile is a surprisingly simple text document that describes how the Docker image will be built, like a blueprint. You first select a base image to start with using the keyword FROM, and you can find an image to use on DockerHub like we mentioned before; Ubuntu or Alpine Linux are popular choices. From there, you can RUN commands such as downloading, installing, and running your software. We'll, of course, link the docs below.
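To give you an idea, here's a minimal sketch of a Dockerfile for a hypothetical NodeJS app; the base image tag and file names are assumptions for the example, not anything from a specific project:

  # Start from an official NodeJS base image from DockerHub
  FROM node:20-alpine
  # Work inside /app and copy in our (hypothetical) application files
  WORKDIR /app
  COPY package.json .
  # Install dependencies
  RUN npm install
  COPY . .
  # The command the container runs when it starts
  CMD ["node", "app.js"]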
Once the Dockerfile is complete, we can build it with docker build, followed by the -t flag so we can name our image, and we pass the command the location of the Dockerfile. Once it's done, you can verify your image's existence with docker images.
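For example, from the folder containing the Dockerfile, it looks something like this (the image name my-node-app is just a placeholder):

  # Build the image from the Dockerfile in the current directory (.)
  # and name it with the -t flag
  docker build -t my-node-app .
  # List the images on this machine to confirm it was created
  docker images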
Now, with your built image, you can run a container from that image or push it to the cloud to share with others. Speaking of sharing, if you don't want to create your own Docker image and just want to use a pre-made one, you can pull one from DockerHub using docker pull and the image name. You can also include a tag if one is available, which specifies a version or variant of the software; if you don't specify one, the image tagged latest will be fetched.
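For instance, something like this; the image names and tag here are just examples:

  # Pull a specific version or variant by adding a tag after the colon
  docker pull ubuntu:22.04
  # With no tag, Docker fetches the image tagged "latest"
  docker pull nginx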
To run a container, pull one down from DockerHub or build your own image, and enter docker run followed by the image name. There are, of course, many options available, such as running the container detached with -d, or assigning ports for web services. You can view your running containers with docker container ls.
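Putting that together, here's a quick sketch; the image, port mapping, and container name are placeholder assumptions:

  # Run a container detached (-d), map host port 8080 to port 80 inside
  # the container, and give it a friendly name
  docker run -d -p 8080:80 --name my-web-server nginx
  # List the containers that are currently running
  docker container ls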
As you add more containers, they will appear in that list. Running single containers is fun, but it's annoying to type out all of these commands to get a container running, and we may want to control several containers as part of a single application, such as running an app and a database together, something you might want to do to run WordPress, for example.
We're going to be accomplishing that with Docker Compose in our next video, where we build our 23 TB home server. If you are going to run a secure home server, you HAVE to get yourself a domain name for SSL so you can access that server from anywhere in the world. Thank you to Hover.com for sponsoring; you can get 10% off your first domain by visiting Hover.com/TechSquid. Check out their Find a Domain tool to discover cool domain names by entering keywords. Since we are building a home server, here are some home server domains available at the moment. Grab them fast, support the show, and get 10% off by visiting Hover.com/TechSquid.
Thank you for watching. Let me know if you'd be interested in a part 2 where we go deeper into Docker and cover the specifics. Stick around: a 23 terabyte server build and a Docker Compose tutorial are coming your way. If you have any questions, come join us on Discord! Everything you need is in the description below. See you in the next video.