Introduction

Good afternoon, everyone, and welcome to today's webinar, the introduction to our managed Kubernetes offering here at 1&1 IONOS. My name is Benjamin Schmidt. I have been a technical consultant at 1&1 IONOS for over two years and have been part of the Kubernetes project team since the first conception of this new service. In this webinar I will first introduce the topic of Kubernetes in general. Then we will take a deeper look at what the "managed" in managed Kubernetes actually means: what do we at 1&1 IONOS actually manage? Then, what does the architecture look like when you deploy your first cluster, and of course the most important topic for most of you, I think: how do I actually create a Kubernetes cluster, so the individual steps to create your first cluster. We will then talk a little about pods, daemon sets, and additional services you might see which are specific to the IONOS Kubernetes clusters, and about the persistent volumes and storage classes that we have implemented and are making available for you. Last but not least, a small demo: how do we actually use it, what does it look like from a Kubernetes perspective to deploy a service, and especially how do we use the persistent volumes? A small note about the knowledge prerequisites for this webinar: I will not provide an introduction to Kubernetes itself, so the assumption is that all attendees have a basic understanding of what Kubernetes is and of the different concepts and resources it uses. Names like pods, replica sets, daemon sets, and persistent volumes should all be familiar to the attendees, and I will not explain those terms. If you do have any questions during this webinar, we will have a question-and-answer session at the end; please use the GoToWebinar question function to simply post your question there. To begin with, I'd like to make an analogy to a poem by Johann Wolfgang von Goethe called "The Sorcerer's Apprentice".
"The Sorcerer's Apprentice" is actually one of Goethe's most popular texts. The magician's apprentice wants to ease his own life by calling brooms to life to help with the daily chores, but shortly after using this powerful magic, the apprentice has to realize that living brooms are not that easy to handle after all. Things get out of hand quickly, and the apprentice calls for his master to get a grip on the situation. He cries out, and now I quote the actual poem: "Still they're running, wetter and wetter, down the steps the water's falling. How appalling, all this water! Lord and Master, hear me calling! Oh, here comes my master. Help me, Lord, I plead! Spirits I have conjured no longer pay me heed." End of quote. We might have felt the same way as the apprentice did when introducing microservices and containers into our organizations. It was easy to handle when it was just a few microservices, or when containers were purely a topic for developers. But with the increase in the number of services and the adoption of containers even for production workloads, things can easily get overwhelming and hard to handle. That's where Kubernetes enters the picture as the master magician, or the orchestrator of containers, finally getting control of all those containers again. However, Kubernetes itself is not that easy to set up and manage, as I think a lot of you have already found out. A lot of magic is happening under the hood, which is fine as long as everything is running, but when things get complicated, say in the case of a host failure or a cluster failure, that is when deep Kubernetes know-how may actually be necessary. This is where a managed Kubernetes provider comes in handy: instead of having to build up Kubernetes expertise in-house, you can simply consume a Kubernetes cluster without all the operational overhead and without needing all the knowledge to run it yourself. And that is our motivation for us
to offer a managed Kubernetes service.

What is Managed

But what actually is the "managed" in managed Kubernetes? We are focusing on three main strengths with our new product: first, it is fully automated; second, fully integrated; and third, fully supported. By fully automated we mean that the cluster is set up fully automatically when it is provisioned by a user. Provisioning takes place via our usual Cloud API, and you are provided with the control plane, which contains the components of the master nodes and is set up in a highly available and geo-redundant way. You get access via kubectl: you can download the kubeconfig via the API, and through kubectl you have cluster-admin privileges. Also part of "fully automated" is the lifecycle management of Kubernetes versions, so we take care of updates and security patches. This is not part of the beta but will be coming soon after the commercial launch, which is planned for the beginning of September. Fully integrated means that it is integrated into our existing infrastructure-as-a-service offering, the IONOS Enterprise Cloud. It is also integrated in the sense that we bring persistent storage into the offering: we have implemented our own CSI-based storage classes, which we will go into in more detail later. And it is fully integrated into our Data Center Designer; we are currently in the final phase of this implementation, so you will see that go live for the commercial launch at the beginning of September, as I mentioned. Fully supported means we provide the same kind of support as we do for our Enterprise Cloud. Our professional services team of qualified cloud consultants will help you move to Kubernetes, assist you when it comes to designing your setup, and give you advice on the architecture of Kubernetes itself. On top of that, of course, the normal incident management process is handled by our 24/7 support team, who are also experienced with Kubernetes and really offer enterprise-grade support.
Architecture

Now, what does the architecture actually look like? First of all, what you might know as the master nodes is abstracted away inside a centrally IONOS-managed control plane. As a user, you will not see any master nodes inside your Kubernetes cluster. The components that usually run on the master nodes, such as the Kubernetes API server, the Kubernetes controller manager, the scheduler, and the storage backend etcd, each run as containers inside an IONOS-managed Kubernetes cluster. This cluster itself is set up to be highly available, and it is even geo-redundant in that it spans our three German data centers: Frankfurt, Karlsruhe, and Berlin. Each customer cluster then has at least one so-called node pool. This is something you provision via the API, or later on also via the Data Center Designer. A node pool is a concept we have adopted from Google, who also use node pools: it is a group of nodes within a cluster that all have the same configuration, in the sense of the same number and type of CPU cores, the same amount of RAM, and the same amount of root storage. So you will not provision and configure individual worker nodes; they are grouped as a node pool. If you want to utilize a highly available setup, you can provision node pools in different availability zones. This is pictured here: we have two zones within each virtual data center, so you are free to pin a node pool to a particular zone, and you can even create node pools in different virtual data centers, with each virtual data center possibly being in a different physical location. The name of the node pool that you provide and the UUID of the virtual data center are added as labels to the worker nodes, so when deciding where to place which pod, you can use the normal Kubernetes mechanisms for this, namely labels.
You can define pod affinity and anti-affinity rules, use node selectors, or use node taints and tolerations to manage where each pod or service runs: in which zone, in which virtual data center, in which physical data center.

Cluster details

Let's take a look at the cluster details. Currently we deploy Kubernetes version 1.15.2; you may see this change over time, and we will certainly also offer newer versions. As the container runtime we have chosen Docker, in the current 18.09.7 version. And as the whole system is built on the IONOS Enterprise Cloud, we also reuse the resources you find there. One such resource is the operating system image inside the Enterprise Cloud, and there we have decided to base the nodes on the latest CentOS 7 image offered. The details, or configuration, of the worker nodes are of course left up to you, the user, to define. As I said, since everything is based on the normal Enterprise Cloud resources, you can choose the CPU architecture: Intel Skylake, Intel Xeon, or AMD processors. The number of cores is something you provide in the API call, the RAM is also defined by you via the API call, and the storage type, SSD or HDD, as well as the size, is likewise defined by you. Communication between the centrally located control plane and your individual worker nodes is encrypted, secured by TLS with mutual authentication. As the network provider we have chosen Project Calico; it provides the overlay network itself. You might know others, but Calico is very powerful: it even allows you as a user to gain access to its API, and through that you can, for example, define individual network policies. It has one downside: it does not itself encrypt the communication between the worker nodes. This is why we have implemented encryption ourselves on top of it, basically as an IPsec configuration that encrypts the overlay-network traffic between the worker nodes.
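To tie this back to the node-pool labels mentioned in the architecture section: here is a minimal sketch of pinning a pod to a particular node pool with a nodeSelector. The label key and value below are hypothetical placeholders, not the actual keys our clusters set; check the real labels on your nodes with `kubectl get nodes --show-labels`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod
spec:
  # Hypothetical label key/value -- substitute the label that carries
  # your node pool's name, as shown by `kubectl get nodes --show-labels`.
  nodeSelector:
    nodepool: frankfurt-pool-1
  containers:
  - name: app
    image: nginx
```

The same labels work equally well in affinity rules if you need softer, preference-based placement instead of a hard constraint.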
How to create a cluster

Now to the actual doing: how do we create a cluster? There are two steps that serve as a kind of preparation; they are optional. We have a new permission that can be added to the groups inside user accounts, so you may need to authorize the users within your account to be able to create new Kubernetes clusters. And you can reuse any of your existing virtual data centers, or of course create a new virtual data center. This slide is just an overview; we will go through each step on the following slides. The actual process of creating a cluster and being able to use it has three steps: first you create the Kubernetes cluster, which basically deploys the control plane; then you create the first node pool; and then you can download the kubeconfig, giving you access to the Kubernetes API. So let's look at them in detail.

Optional steps

The information you find on this slide is taken from our Cloud API reference documentation, so you will find the same information in our DevOps portal, which contains the documentation for the API; here it is just gathered as a reference point. The first optional step, as I mentioned, is to add a permission to an existing group. For the API call you need the group ID, and you use the HTTP method PUT (or PATCH). Inside the payload you need to provide the name of the group; although you will not edit the name, it is still required for this particular call. And you need to add the create-Kubernetes-cluster parameter and set it to true. That means all users inside this particular group will have the permission to create new Kubernetes clusters. The second optional step is to create a new virtual data center. I guess this call is familiar to most of our customers: you will need to provide a name, a description, and the location.
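As a sketch, the group-permission payload just described could be assembled like this. The property name `createK8sCluster` and the endpoint path in the comment are assumptions based on the talk, not verified against the Cloud API reference.

```python
import json

def build_group_update_payload(group_name: str) -> str:
    """Build the PUT body that re-states the group name (required even
    though it is unchanged) and grants the cluster-creation permission.
    The property name "createK8sCluster" is an assumption from the talk."""
    payload = {
        "properties": {
            "name": group_name,        # required, even if unchanged
            "createK8sCluster": True,  # the new permission flag
        }
    }
    return json.dumps(payload)

# This body would be sent via HTTP PUT to the group resource, e.g.
# PUT .../um/groups/{groupId}  (path is illustrative only)
print(build_group_update_payload("k8s-admins"))
```

Every user in the group then inherits the permission; there is no per-user flag to set.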
The location is the physical data center, so one of the three in Germany (Frankfurt, Karlsruhe, or Berlin), or, if you are coming from the UK, our London data center, or of course one of the two data centers in the US, Las Vegas and Newark. In the response you receive the UUID of the data center; please save it, since we will use it later on.

Create Kubernetes cluster

Now, to create the Kubernetes cluster, we have introduced a new API resource called k8s. Again, with the method POST we create the new object, and the only thing that is required is a name. The response contains the UUID of the new Kubernetes cluster; again, save this one for later, as we will need it to create the first node pool. Also interesting is that the Kubernetes cluster has a state: it is in state "deploying" while it is being set up, and you need to wait until the state is "active" before you can create the first node pool. You can do a simple GET on this particular resource to get an update on the state. Once it is in state "active", you are free to create node pools.

Create node pools

Now create the first node pool, so a set of worker nodes with identical configuration. The configuration is directly part of the payload, so you provide, first of all, the data center ID, either the one you created in the steps before or that of an existing data center. A node pool has a name, and then come the parameters, the configuration of the node pool, or specifically of the worker nodes. How many worker nodes should my node pool contain? That is the parameter node count. Then the CPU architecture, as I mentioned; the number of cores; the size of the RAM; and into which availability zone you want to deploy your worker nodes. You can also leave the zone on "auto", which basically leaves it up to the provisioning system to select the zone for you. Then the storage type, either HDD or SSD, and the storage size. The minimum storage size is 10 gigabytes, which is what we reserve for the operating system and for any log files or other files generated on the boot partition.
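The two creation payloads just described can be sketched as follows. The field names (nodeCount, coresCount, ramSize, and so on) and the endpoint paths in the comments are assumptions drawn from the talk, not an authoritative API reference.

```python
import json

def cluster_payload(name: str) -> dict:
    # POST /k8s -- the only required property is the cluster name.
    return {"properties": {"name": name}}

def node_pool_payload(datacenter_id: str, name: str, node_count: int) -> dict:
    # POST /k8s/{clusterId}/nodepools -- every worker node in the pool
    # shares this configuration. Field names are assumptions.
    return {
        "properties": {
            "datacenterId": datacenter_id,
            "name": name,
            "nodeCount": node_count,
            "cpuFamily": "INTEL_SKYLAKE",  # or Intel Xeon / AMD
            "coresCount": 2,
            "ramSize": 4096,               # MB
            "availabilityZone": "AUTO",    # let provisioning pick the zone
            "storageType": "HDD",          # or "SSD"
            "storageSize": 10,             # GB; 10 is the minimum
        }
    }

print(json.dumps(cluster_payload("my-cluster")))
print(json.dumps(node_pool_payload("dc-uuid", "pool-1", 3), indent=2))
```

Remember to poll the cluster resource and wait for state "active" between the two calls; the node-pool request will not succeed while the control plane is still deploying.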
Again, as a response you receive a UUID, and the node pool also has a state: it is in state "deploying" while it is being built up, and it becomes "active" once all the worker nodes have been provisioned and configured. This might take a while.

Query node pools

It can actually take more than 30 minutes, depending also on the number of worker nodes included in your node pool, so please be patient. You can continually poll the node pool resource. You can poll a node pool directly by appending its UUID, but, as you might be familiar with, we have also introduced the query parameter "depth", which allows you to get the collection of items contained underneath node pools. There you will see your node pool UUID again, with the state either "deploying" or "active". In the final step you can download your kubeconfig file.

Config file

Use the resource with your Kubernetes cluster ID followed by /kubeconfig and do a GET on it. The response contains some metadata, but the actual kubeconfig that you can copy and paste into a file of your own is inside the kubeconfig parameter, basically everything that is highlighted on this slide. Simply copy and paste it into your own file and convert the backslash-n sequences into proper newlines; then the file will have the correct formatting, and you can directly connect to the Kubernetes API via kubectl. One of the first things you will probably do once you have set up your first Kubernetes cluster is to see which pods and daemon sets are already running inside the cluster. You will see pods that you are certainly familiar with, but you will also see four things which might be new to you; they are shown here on this particular slide. The first is a set of pods, daemon sets, and services whose names all start with "calico".
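The backslash-n conversion mentioned above is trivial to script; a minimal sketch (the shape of the copied value is an assumption):

```python
def unescape_kubeconfig(raw: str) -> str:
    """Turn the literal backslash-n sequences from the copied API value
    into real newlines so the file parses as YAML."""
    return raw.replace("\\n", "\n")

# Example: a value copied out of the "kubeconfig" response property
copied = "apiVersion: v1\\nkind: Config\\nclusters: []"
fixed = unescape_kubeconfig(copied)
print(fixed)
# The fixed text can then be written to a file and used with:
#   kubectl --kubeconfig ./kubeconfig get nodes
```

If you fetch the response with a JSON-aware client instead of copying by hand, the escapes are usually decoded for you and this step is unnecessary.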
As mentioned, Project Calico is the network provider we use inside the cluster; it is also the network policy engine, so it has a certain set of pods running as a controller and as a daemon set on each of the worker nodes. The second one you will see is a very IONOS-specific one, called csi-ionos-cloud. This is the container storage interface that we have implemented, which takes care of providing block storage as persistent volumes for your pods; it also runs as a daemon set on every worker node. Then you will see an nginx proxy running as a daemon set: because the control plane is set up to be highly available, we need a load balancer to access the control plane, and this load balancer runs on each individual worker node. The final one you might not be familiar with is called seal. This is actually our own development, our own implementation of the IPsec daemon, which is needed to encrypt the communication between the worker nodes.

Storage class

You will then also have the chance, of course, to see our existing storage classes. As I mentioned, we provide a default storage class, so you have the possibility to use persistent storage within the IONOS Kubernetes cluster. We can quickly go through the individual parameters you will see if you do a kubectl describe storageclass on it. The default one is called ionos-enterprise-hdd. It has the parameter to allow volume expansion, which means you can also increase the size of an already existing volume: in the persistent volume claim, simply increase the size to whatever you want it to be, and the CSI driver will expand the block volume, and Kubernetes will take care of growing the file system and mounting it at the right spot inside the pod. Then there is the reclaim policy, which is set to Delete.
The reclaim policy for a persistent volume tells the cluster what to do with the volume after it has been released from its claim. Currently, in the default class, it is set to Delete, which deletes the volume automatically when the claim is deleted. But since this might not always be desired (you might still need some data on the volume), you can create your own storage class, which is simply a copy of this default storage class with the reclaim policy set to Retain. This basically tells the CSI driver to keep, or retain, the volume even after the persistent volume claim has been deleted, so you can still access the data on your volumes. When a user is done with a volume, they can delete the persistent volume claim via the API, which allows the reclamation of the resource. The other option you can set is the volume binding mode. It is set to WaitForFirstConsumer; this mode delays the binding and provisioning of a persistent volume until a pod that uses the persistent volume claim has actually been created. The alternative is to set it to Immediate, which means the moment you create the persistent volume claim, the persistent volume is also created and attached; this basically means the pod will then need to run on the particular worker node on which the volume has been mounted. We also have a second storage class available. The first one was based on HDD block volumes; as Enterprise Cloud customers know, we also offer SSD volumes, so, no surprise, there is an ionos-enterprise-ssd storage class. It basically just changes the parameter "type" from HDD to SSD. When provisioning SSD volumes, the same capabilities as in the Enterprise Cloud apply here as well, meaning the performance of our SSD volumes is based on the size of the volume: the larger the volume, the more guaranteed IOPS you receive as a user.
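As a sketch, a user-defined copy of the default class with Retain, as described above, could look like this. The provisioner string and class name here are assumptions; copy the real values from the output of `kubectl describe storageclass ionos-enterprise-hdd` on your own cluster.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: enterprise-ssd-retain        # your own class name
provisioner: cloud.ionos.com         # assumption -- copy from the default class
parameters:
  type: SSD                          # the default class uses HDD
reclaimPolicy: Retain                # default class uses Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

With Retain, the released persistent volume and its data stay behind after the claim is deleted, and cleaning them up becomes a deliberate manual step.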
So please take that into consideration: SSDs truly make sense once you go above 100 gigabytes with your persistent volume.

Demo

Now let's actually see how it works live. I have prepared a system; it is basically a fresh installation of one of our clusters, based on Kubernetes version 1.15.2, and I simply want to show you a simple example. We are calling it the "whoareyou" service. Let's look at the YAML definition of the service. In the first part I create a persistent volume claim, which uses the default storage class. I would not have to mention the class explicitly here, but for your sakes, of course, I have written it out. The only access mode that we support is ReadWriteOnce, which basically means that one pod at a time is able to access this particular volume in a read-write fashion, and you need to define the size of the block volume, again starting at 10 gigabytes here. If you are using the SSD storage class, please go above 100 gigabytes to get real performance. The second section inside the YAML definition is the deployment. Inside the deployment we create two things: first the volume, referencing the persistent volume claim that I had previously created directly via its claim name, and second a single container containing a volume mount, namely that particular volume, mounted at the path /data. The last thing is that we create a service out of it: we want to make our deployment accessible externally, and here we use a service of type LoadBalancer. This means that you receive a static IP address from the system, over which you will be able to access the service. Let's deploy this YAML file and see how the system creates the individual resources; we will do a watch on kubectl get all. We see the persistent volume claim is currently in state Pending, which means the call has gone out to the Cloud API to provision a new block volume.
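For reference, the YAML definition walked through above might look roughly like this. The image name and object names are stand-ins, not the actual demo file.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: whoareyou-data
spec:
  accessModes: ["ReadWriteOnce"]     # the only supported mode
  storageClassName: ionos-enterprise-hdd
  resources:
    requests:
      storage: 10Gi                  # 10 GB is the minimum
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoareyou
spec:
  replicas: 1
  selector:
    matchLabels: {app: whoareyou}
  template:
    metadata:
      labels: {app: whoareyou}
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: whoareyou-data  # references the claim above by name
      containers:
      - name: whoareyou
        image: containous/whoami     # stand-in image, not the demo's
        volumeMounts:
        - name: data
          mountPath: /data
---
apiVersion: v1
kind: Service
metadata:
  name: whoareyou
spec:
  type: LoadBalancer                 # triggers a static external IP
  selector: {app: whoareyou}
  ports:
  - port: 80
    targetPort: 80
```

Applying this with `kubectl apply -f` creates all three objects in one go, which is exactly what the watch output in the demo tracks.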
Kubernetes already knows on which particular worker node it wants to run the whoareyou deployment, and so the CSI implementation will make sure, once the persistent volume is created, to mount it on the correct host. This might take a few seconds; we will just give it time to do so. Let's use the time to look at the service we are creating. I had mentioned, or rather defined, that it is of type LoadBalancer, which means the service will receive an external IP address. This is of course a static IP address: it will be reserved inside your account, and you can also see this particular IP address in the IP manager if you open the Data Center Designer. And I am exposing this service on port 80. Since it is taking a little longer than I am used to, we might actually go into the question-and-answer session already, and I will then come back to this particular example. Okay, while we wait a little: the persistent volume has now been created. We see the new object appearing here with the size that I defined and the ReadWriteOnce access mode, and it is bound to the persistent volume claim that I created. The service is also available now: the static IP address has been reserved and is shown in the overview. So let's try to access the service directly via the IP address; it should be available on port 80. Maybe the pod needs to be created first, so we will wait for the pod to reach the status Running. There it is, running, and we can see that the service is available via the service type LoadBalancer, via the static IP address that is automatically assigned and attached to this particular host. This is a small implementation that we have done with the cloud controller manager inside Kubernetes, which assigns the static IP address to one of the worker nodes running inside your Kubernetes cluster.
It will also take care, in case that worker node becomes unavailable, or you drain or cordon it, that the IP address is automatically assigned to the next worker node.

Support

During the beta we would like to encourage you to use the service freely; it is free of charge. For our existing customers this feature is already available, so simply use the API reference that I showed you to create your first cluster. For our new customers, you can sign up through our website; the link is here on the slides, which we will send out. Give it a try: it is completely free this month, until the commercial launch, which is planned for the beginning of September. Of course, incidents will happen, and you might have some problem with your cluster; please use our 24/7 support for that. As a reference, you can contact them via the specified phone number or via the normal email address as well. If you have any architectural questions or want to get a deeper overview of our service, my colleague and I are also here to help you; simply send us an email and we can schedule a personalized web session with you. So this was the general introduction to the topic, and now I will go into the questions.