<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>@snehacynix</title><generator>teletype.in</generator><description><![CDATA[@snehacynix]]></description><link>https://teletype.in/@snehacynix?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/snehacynix?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/snehacynix?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Sun, 05 Apr 2026 06:02:14 GMT</pubDate><lastBuildDate>Sun, 05 Apr 2026 06:02:14 GMT</lastBuildDate><item><guid isPermaLink="true">https://teletype.in/@snehacynix/ZCh2xzOfL</guid><link>https://teletype.in/@snehacynix/ZCh2xzOfL?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/ZCh2xzOfL?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>File Handling in Ruby</title><pubDate>Tue, 07 Jul 2020 13:16:46 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/e0/8d/e08d8544-31f9-446b-a77b-423d2f7f3a66.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/82/a8/82a8abef-5499-46ce-acde-980a8c23002f.png"></img>It is a way of processing a file such as creating a new file, reading content in a file, writing content to a file, appending content to a file, renaming the file and deleting the file. More additional Information On Ruby On Rails Course]]></description><content:encoded><![CDATA[
  <figure class="m_original">
    <img src="https://teletype.in/files/82/a8/82a8abef-5499-46ce-acde-980a8c23002f.png" width="471" />
  </figure>
  <p>File handling is a way of processing a file: creating a new file, reading content from a file, writing content to a file, appending content to a file, renaming the file and deleting the file. For additional information, see the <a href="https://onlineitguru.com/ruby-on-rails-online-training-placement.html" target="_blank"><strong>Ruby On Rails Course</strong></a>.</p>
  <p><strong>Common modes for File Handling</strong><br /><strong>“r”</strong> : Read-only mode; the file must exist.<br /><strong>“r+”</strong> : Read-write mode; the file must exist.<br /><strong>“w”</strong> : Write-only mode; truncates an existing file or creates a new one.<br /><strong>“w+”</strong> : Read-write mode; truncates an existing file or creates a new one.<br /><strong>“a”</strong> : Write-only mode; if the file exists, data is appended to it, otherwise a new file is created.<br /><strong>“a+”</strong> : Read-write mode; if the file exists, data is appended to it, otherwise a new file is created.</p>
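  <p>The append modes above can be sketched as follows (a hypothetical log.txt file, not part of the original examples):</p>

```ruby
# Start clean so the demonstration is repeatable
File.delete("log.txt") if File.file?("log.txt")

# "a" appends to log.txt, creating it first if it does not exist
File.open("log.txt", "a") { |f| f.puts("first line") }
File.open("log.txt", "a") { |f| f.puts("second line") }

# "a+" additionally allows reading; rewind before reading back
File.open("log.txt", "a+") do |f|
  f.rewind
  print f.read
end
```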
  <p><strong><em>Syntax</em></strong></p>
  <pre>fileobject = File.new(&quot;filename.txt&quot;, &quot;mode&quot;)
fileobject.syswrite(&quot;Text to write into the file&quot;)
fileobject.close()</pre>
  <p><strong>Below is the implementation for creating a new file and writing into it.</strong></p>
  <pre># File Handling Program
# Creating a file
fileobject = File.new(&quot;sample.txt&quot;, &quot;w+&quot;)

# Writing to the file
fileobject.syswrite(&quot;File Handling&quot;)

# Closing the file
fileobject.close</pre>
  <p><strong>Description:</strong><br />A text file named sample.txt is created with read and write permission. The string “File Handling” is written to the file using the syswrite method, and the file is then closed. When you open sample.txt, it contains the string “File Handling”.</p>
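  <p>As a side note (an idiom beyond the original example, sketched here with a hypothetical block_sample.txt file), File.open with a block closes the file automatically, so an explicit close call is not needed:</p>

```ruby
# The block form closes the file even if an exception is raised inside it
File.open("block_sample.txt", "w+") do |f|
  f.write("File Handling")
end

# The file is already closed here; read the content back
puts File.read("block_sample.txt")
```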
  <p><strong><em>Syntax</em></strong></p>
  <pre>fileobject = File.new(&quot;filename.txt&quot;, &quot;r&quot;)
fileobject.sysread(20)
fileobject.close()</pre>
  <p><strong>Below is the implementation for reading the content from a file.</strong></p>
  <pre># File Handling Program
# Opening a file
fileobject = File.open(&quot;sample.txt&quot;, &quot;r&quot;)

# Reading the first n characters from the file
puts(fileobject.sysread(21))

# Closing the file
fileobject.close

# Opening the file again
fileobject = File.open(&quot;sample.txt&quot;, &quot;r&quot;)

# Reading the content as an array of lines
print(fileobject.readlines)
puts

# Closing the file
fileobject.close</pre>
  <p><em>To get in-depth knowledge, see <a href="https://onlineitguru.com/ruby-on-rails-online-training-placement.html" target="_blank"><strong>Ruby On Rails Online Training</strong></a>.</em></p>
  <pre># Opening the file again
fileobject = File.open(&quot;sample.txt&quot;, &quot;r&quot;)

# Reading the entire content from the file
print(fileobject.read)

# Closing the file
fileobject.close</pre>
  <p><strong>Output:</strong></p>
  <figure class="m_original">
    <img src="https://proseful.imgix.net/blogs/a0350674-66b9-4580-814b-f69567fab55c/images/0248c88a-0b5e-477b-a2b0-3c6f93f1cee4.png?fit=max&q=80&w=720&s=50051401df6ae35d72404ad41f89a7ec" />
  </figure>
  <p><strong>Description:</strong><br />The sample text file contains the string “File handling in Ruby language”. The file is opened with read-only permission, and the output above shows the different ways of reading and printing it. The sysread method reads only the first 21 characters and prints them. The readlines method reads the content as an array of lines. The read method reads the entire content of the file into a single string. See <a href="https://onlineitguru.com/ruby-on-rails-online-training-placement.html" target="_blank"><strong>ruby on rails training</strong></a> for more skills and techniques from industry experts.</p>
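  <p>For large files, reading everything at once can be wasteful. Here is a line-by-line sketch (using a hypothetical lines.txt file, not part of the original post) with File.foreach, which yields one line at a time:</p>

```ruby
# Prepare a small sample file for the demonstration
File.open("lines.txt", "w") { |f| f.puts("File handling", "in Ruby language") }

# File.foreach yields one line at a time without loading the whole file
File.foreach("lines.txt") do |line|
  puts line.upcase
end
```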
  <pre># Rename the file
puts File.rename(&quot;sample.txt&quot;, &quot;newSample.txt&quot;)

# Delete an existing file
puts File.delete(&quot;sample1.txt&quot;)

# Check whether the old filename still exists
puts File.file?(&quot;sample.txt&quot;)

# Check whether the renamed file exists
puts File.file?(&quot;newSample.txt&quot;)

# Check whether the file has read permission
puts File.readable?(&quot;newSample.txt&quot;)

# Check whether the file has write permission
puts File.writable?(&quot;newSample.txt&quot;)</pre>
  <p><strong>Output:</strong></p>
  <figure class="m_original">
    <img src="https://proseful.imgix.net/blogs/a0350674-66b9-4580-814b-f69567fab55c/images/e2615abf-9ff9-4ec2-916a-6c0d28056551.png?fit=max&q=80&w=720&s=586a8f2eb7dcfd2fde2ac1b6f1f2d5c1" />
  </figure>
  <p><strong>Description:</strong><br />The rename method renames the file from the old name to the new name and returns 0 when the rename succeeds. The delete method deletes an existing file and returns the number of files deleted, so it prints 1 here. The file? method checks whether a file exists: it returns false if the file does not exist and true otherwise. In our case the sample.txt file no longer exists because we renamed it to newSample.txt, so file? returns false for it, while newSample.txt exists and returns true. The newSample.txt file has both read and write permission, so the last two statements print true.</p>
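  <p>The return values described above can be checked directly. A small sketch using throwaway file names (demo.txt and demo2.txt are hypothetical, not from the original post):</p>

```ruby
# Create a throwaway file, then exercise the methods
File.open("demo.txt", "w") { |f| f.write("x") }

puts File.rename("demo.txt", "demo2.txt")  # 0 on success
puts File.file?("demo.txt")                # false: the old name is gone
puts File.file?("demo2.txt")               # true: the new name exists
puts File.delete("demo2.txt")              # 1: the number of files deleted
```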
  <p>Take your career to new heights of success with Ruby: enroll for a live free demo of <a href="https://onlineitguru.com/ruby-on-rails-online-training-placement.html" target="_blank"><strong>Ruby On Rails Training</strong></a>.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/gYrhen6_Q</guid><link>https://teletype.in/@snehacynix/gYrhen6_Q?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/gYrhen6_Q?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>Setup Kubernetes cluster using kubeadm in vSphere virtual machines.</title><pubDate>Fri, 03 Jul 2020 12:31:06 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/92/0f/920f1613-6874-40f3-b6bd-f45b3cf12e36.png"></img>Let us create the required number of virtual machines for setting up cluster using the preferred operating system. Here, I am going with Ubuntu-18.04.3. I have planned to setup a cluster using single control plane(master) and three worker nodes.]]></description><content:encoded><![CDATA[
  <h2>Create Virtual Machines.</h2>
  <p>Let us create the required number of virtual machines for setting up the cluster, using the preferred operating system. Here, I am going with Ubuntu 18.04.3. I plan to set up a cluster with a single control-plane (master) node and three worker nodes.</p>
  <p>Each node should be equipped with at least 2 GB of memory, 20 GB of disk space and 2 vCPUs. To make disk space usage optimal in VMware, enable thin provisioning while creating the virtual disks. <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes online training</a></strong> helps you learn more effectively.</p>
  <p>Let us customise the virtual machines with the preferred configuration and boot them from the installation ISO. Once the virtual machines are created successfully, follow the steps below to configure a Kubernetes cluster.</p>
  <h2>Setup Networking</h2>
  <p>Based on your networking solution, configure network settings in the virtual machines. Ensure that all the machines are connected to each other.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/92/0f/920f1613-6874-40f3-b6bd-f45b3cf12e36.png" width="228" />
    <figcaption>Kubernetes</figcaption>
  </figure>
  <h2>Setup hostname (optional)</h2>
  <p>Set a meaningful hostname on each node if necessary.</p>
  <pre>sudo hostnamectl set-hostname &lt;hostname&gt;</pre>
  <p>Reboot the machine to make the change effective.</p>
  <h2>Enable ssh on the machines</h2>
  <p>If ssh is not configured, install openssh-server on the virtual machines and enable connectivity between them.</p>
  <pre>sudo apt-get install openssh-server -y</pre>
  <h2>Disable swap on the virtual machines.</h2>
  <p>As a superuser, disable <em>swap</em> on all the machines by executing the command below.</p>
  <pre>swapoff -a</pre>
  <p>To disable <em>swap</em> permanently, comment out the <em>swap</em> entry in the <code>/etc/fstab</code> file.</p>
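  <p>That edit can be scripted. Below is a hedged sketch that operates on a local sample file by default (the FSTAB variable and the fstab.sample name are assumptions for safety, not part of the original how-to). On a real node, run it as root with <code>FSTAB=/etc/fstab</code>:</p>

```shell
# Target file; defaults to a local sample so the sketch is safe to run as-is
FSTAB="${FSTAB:-fstab.sample}"

# Create a sample fstab for the demonstration if the target is missing
[ -f "$FSTAB" ] || printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > "$FSTAB"

# Keep a backup, then comment out every uncommented line mentioning swap
cp "$FSTAB" "$FSTAB.bak"
sed -i '/swap/ s/^[^#]/#&/' "$FSTAB"
```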
  <p>This can be verified using the following command.</p>
  <pre>root@host1:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G        990M        6.0G         13M        797M        6.6G
Swap:          2.0G          0B        2.0G</pre>
  <p>Note: This has to be done on all the machines.</p>
  <h2>Install necessary Packages</h2>
  <p>Let us install <code>curl</code> and <code>apt-transport-https</code> on all the machines.</p>
  <pre>sudo apt-get update &amp;&amp; sudo apt-get install -y apt-transport-https curl</pre>
  <p>Obtain the key for the Kubernetes repository and add it to your local keyring by executing the command below.</p>
  <pre>root@host1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK</pre>
  <p>After adding the above key, execute the below command to add the kubernetes repo to your local system.</p>
  <pre>cat &lt;&lt;EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF</pre>
  <h2>kubeadm, kubectl and kubelet installation</h2>
  <p>After adding the repository, install <code>kubeadm</code>, <code>kubelet</code> and <code>kubectl</code> on all the machines.</p>
  <pre>sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl</pre>
  <p>After installing the above packages, let us hold them at their current versions by executing the following command.</p>
  <pre>root@host1:~# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.</pre>
  <h2>Install Container Runtime</h2>
  <p>On each node, a container runtime should be installed to manage the containers. In this setup, I will install <code>docker</code> as the container runtime by executing the command below.</p>
  <pre>sudo apt-get install docker.io -y</pre>
  <h2>Install Control plane</h2>
  <p>On the master node, execute the <code>kubeadm init</code> command to deploy the control-plane components.</p>
  <pre>kubeadm init --pod-network-cidr=192.168.2.0/16</pre>
  <p>When the above command completes successfully, it prints a join command to be executed on each worker node to join it to the master.</p>
  <h2>Worker nodes.</h2>
  <p>After configuring the master node successfully, configure the worker nodes by executing the <em>join</em> command displayed on the master node.</p>
  <pre>kubeadm join x.x.x.x:6443 --token &lt;token&gt; \
    --discovery-token-ca-cert-hash &lt;hash&gt;</pre>
  <h2>Accessing Cluster</h2>
  <p>You can communicate with the cluster components using the <code>kubectl</code> interface. In order to communicate, the Kubernetes cluster config file must be placed in the <code>home</code> directory of the user from which you want to access the cluster. Once the cluster is created, a file named <code>admin.conf</code> is generated in the <code>/etc/kubernetes</code> directory; this file has to be copied to the home directory of the target user. <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes online course</a></strong> helps you learn more effectively.</p>
  <p>Let us execute the below commands from the non-root user to access cluster from that respective user.</p>
  <pre>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</pre>
  <p>After setting up the kubeconfig file, check the node status. All the machines will be in the NotReady state.</p>
  <pre>k8s@master:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   5m41s   v1.17.2
host1    NotReady   &lt;none&gt;   3m2s    v1.17.2
host2    NotReady   &lt;none&gt;   2m58s   v1.17.2
host3    NotReady   &lt;none&gt;   2m54s   v1.17.2</pre>
  <p>You can also observe that the coredns pods have not started.</p>
  <pre>NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-9nlw5         0/1     Pending   0          4m33s
kube-system   coredns-6955765f44-wjxj2         0/1     Pending   0          4m33s
kube-system   etcd-master                      1/1     Running   0          4m45s
kube-system   kube-apiserver-master            1/1     Running   0          4m45s
kube-system   kube-controller-manager-master   1/1     Running   0          4m45s
kube-system   kube-proxy-bzcbw                 1/1     Running   0          2m6s
kube-system   kube-proxy-clmpz                 1/1     Running   0          2m14s
kube-system   kube-proxy-crx5v                 1/1     Running   0          4m32s
kube-system   kube-proxy-xcmlv                 1/1     Running   0          2m10s
kube-system   kube-scheduler-master            1/1     Running   0          4m45s</pre>
  <p>This will be resolved when you deploy network CNI plugin in the cluster. Here, I will deploy <em>calico</em> by executing the following command in the master node.</p>
  <pre>kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml</pre>
  <p>In the next few minutes, your cluster will be created successfully. Check the node status and ensure the successful creation.</p>
  <pre>k8s@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   50m   v1.17.2
host1    Ready    &lt;none&gt;   47m   v1.17.2
host2    Ready    &lt;none&gt;   47m   v1.17.2
host3    Ready    &lt;none&gt;   47m   v1.17.2</pre>
  <p>You can check the cluster state by executing the following command.</p>
  <pre>k8s@master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       abc1-b95b76d84-2qmhw                       1/1     Running   0          2m41s
kube-system   calico-kube-controllers-5c45f5bd9f-r9rxj   1/1     Running   0          4m59s
kube-system   calico-node-bd4tx                          1/1     Running   0          5m
kube-system   calico-node-lxk75                          1/1     Running   0          5m
kube-system   calico-node-zmnn4                          1/1     Running   0          5m
kube-system   calico-node-zzvhk                          1/1     Running   0          5m
kube-system   coredns-6955765f44-9nlw5                   1/1     Running   0          10m
kube-system   coredns-6955765f44-wjxj2                   1/1     Running   0          10m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-bzcbw                           1/1     Running   0          8m19s
kube-system   kube-proxy-clmpz                           1/1     Running   0          8m27s
kube-system   kube-proxy-crx5v                           1/1     Running   0          10m
kube-system   kube-proxy-xcmlv                           1/1     Running   0          8m23s
kube-system   kube-scheduler-master                      1/1     Running   0          10m</pre>
  <p>Now, the kubernetes cluster has been created successfully. You can verify this by setting up a deployment/pod.</p>
  <pre>k8s@master:~$ kubectl create deploy nginx --image=nginx
deployment.apps/nginx created</pre>
  <p>You can check the pod status by executing the below command.</p>
  <pre>k8s@master:~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-rpzm2   1/1     Running   0          70s</pre>
  <h2>Deleting cluster.</h2>
  <p>The Kubernetes cluster can be torn down by executing the single command below.</p>
  <pre>sudo kubeadm reset</pre>
  <p>Thus, a cluster can be deleted. The <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">online kubernetes course</a></strong> will help you learn more skills and techniques from industry experts.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/uQ0nZ7HH76</guid><link>https://teletype.in/@snehacynix/uQ0nZ7HH76?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/uQ0nZ7HH76?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>Kubernetes vs OpenShift</title><pubDate>Wed, 01 Jul 2020 09:48:40 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/c1/54/c154f7cd-c845-4d5c-be4e-c01495107829.png"></img>
Kubernetes and OpenShift have a lot in common. Actually OpenShift is more or less Kubernetes with some additions. But what exactly is the difference?]]></description><content:encoded><![CDATA[
  <p>Kubernetes and OpenShift have a lot in common. Actually, OpenShift is more or less Kubernetes with some additions. But what exactly is the difference?</p>
  <p>It’s not so easy to tell, as both products are moving targets; the delta changes with every release, be it of Kubernetes or OpenShift. I tried to find out and stumbled across a few blog posts here and there, but they were all based on older versions and thus not really up to date.</p>
  <p>So I took the effort to compare the most recent versions of Kubernetes and OpenShift.<br />Before we dive into the comparison, let me clarify what we are actually talking about. I will focus on bare Kubernetes, i.e. I will ignore all additions and modifications that come with the many distributions and cloud-based solutions. On the other hand, I will talk about Red Hat OpenShift Container Platform (OCP), being the enterprise product derived from OKD aka The Origin Community Distribution of Kubernetes that powers Red Hat OpenShift, previously known as OpenShift Origin. For more info, see <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>Kubernetes online training</strong></a>.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/c1/54/c154f7cd-c845-4d5c-be4e-c01495107829.png" width="600" />
  </figure>
  <p>Kubernetes vs OpenShift</p>
  <p><strong>Base</strong><br />Both products differ in the environments they can run in. OpenShift is limited to Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux Atomic Host. I suppose this limitation is due less to technical reasons than to Red Hat wanting to make supporting OpenShift more viable. This assumption is supported by the fact that OKD can also be installed on Fedora and CentOS.</p>
  <p>On the other hand, Kubernetes doesn’t impose many requirements concerning the underlying OS. Its package manager should be RPM- or deb-based, which covers practically every popular Linux distribution. But you should probably stick to the most widely used distributions: Fedora, CentOS, RHEL, Ubuntu or Debian.</p>
  <p>This applies to so-called bare-metal installations (including virtual machines). It should be mentioned that creating a Kubernetes cluster at that level requires quite some effort and skill. For a more effective way of learning, see <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>Kubernetes certification training</strong></a>.</p>
  <p>But this is the age of cloud computing, so deploying Kubernetes on an IaaS platform or even using managed Kubernetes clusters are also practicable approaches. Kubernetes can be deployed on any major IaaS platform: AWS, Azure, GCE, ….</p>
  <p>Compared to that, there is only a limited selection of OpenShift service providers: OpenShift Online, where you get your own projects on a shared OpenShift cluster, and OpenShift Dedicated, where you get your own dedicated OpenShift cluster in the cloud, the latter based on Amazon Web Services (AWS). If you try really hard you can also find a few more providers, like T-System’s AppAgile PaaS, DXC’s Managed Container PaaS, Atos’ AMOS and Microsoft’s OpenShift, the latter two only announced so far. Like Kubernetes, OpenShift can also be run on all major IaaS platforms.</p>
  <p><strong>Rollout</strong><br />Rolling out Kubernetes is not an easy task. As a consequence of the multitude of platforms it runs on, together with the diversity of options for additional required services, there is an impressive list of ‘turnkey solutions’ promising to facilitate creating Kubernetes clusters on premises with only a few commands. Most (if not all) are based on one of the following installers.</p>
  <p><strong>RKE (Rancher Kubernetes Engine)</strong></p>
  <p>Installer of the Rancher Kubernetes distribution.<br />kops: Installer maintained by the Kubernetes project itself to roll out Kubernetes on AWS and (with limitations) also on GCP.<br />kubespray: Community project of a Kubernetes installer for bare metal and most clouds, based on Ansible and kubeadm.<br />kubeadm: Also an installer provided by the Kubernetes project. It’s more focused on bare metal and VMs. It imposes some prerequisites concerning the machines and is less of a ‘do it all in one huge leap’ tool. It can also be used to add and remove a single node to / from an existing cluster.<br />kube-up.sh: Deprecated predecessor of kops.<br />OpenShift, on the other hand, aims to be a full-fledged cluster solution without the need to install additional components after the initial rollout. Apparently it is mainly targeted towards manual installation on physical (or virtual) machines. Consequently it comes with its own installer based on Ansible, which does a decent job installing OpenShift based on only a minimal set of configuration parameters. However, rolling out OpenShift is still a complex task: there is a plethora of options and variables to specify the properties of the intended cluster.</p>
  <p><strong>Web-UI</strong><br />When it comes to administrating the cluster and checking the status of the various resources via a web based user interface you hit one big difference between Kubernetes and OpenShift.</p>
  <p>Kubernetes offers the so-called dashboard. In my opinion it’s just an afterthought. It’s not an integral part of the cluster but has to be installed separately. Additionally, it’s not easily accessible: it’s not just firing up a certain URL, you have to use <code>kubectl proxy</code> to forward a port of your local machine to the cluster’s API server.</p>
  <p>The result is a web UI that indeed informs you about the status of many components but turns out to be of limited value for real day-to-day administrative work, since it lacks virtually any means to create or update resources. You can upload YAML files to achieve that, but then what’s the gain compared to using kubectl?</p>
  <p>Compared to that, OpenShift’s web console truly shines. It has a login page. It can be accessed without jumping through several hoops. It offers the possibility to create and change most resources in a form-based fashion. An <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>online kubernetes course</strong></a> will help you learn more skills and techniques.</p>
  <p>With the appropriate rights, the web UI also offers you the cluster console for a cluster-wide view of many resources (e.g. nodes, projects, cluster role bindings, …). However, you cannot administrate the cluster itself via the web UI (e.g. add, remove or modify nodes).</p>
  <p><strong>Integrated image registry</strong><br />There is no such thing as an integrated image registry in Kubernetes. You may set up and run your own private docker registry, but as with many additions to Kubernetes, the procedure is not well documented, cumbersome and error-prone.</p>
  <p>OpenShift comes with its integrated image registry that can be used side by side with e.g. Docker Hub and Red Hat’s image registry. It is typically used to store build artifacts and thus cooperates nicely with OpenShift’s ability to build custom images (see below).</p>
  <p>Not to forget the registry console, which presents valuable information about all images and image streams, their relation to the cluster’s projects, and the permissions on the streams. The latter is a prerequisite for OpenShift’s ability to host multiple tenants, hiding the artifacts of one project from members of other projects.</p>
  <p><strong>Image streams</strong><br />Image streams are a concept unique to OpenShift. They allow you to reference images in image registries, either internal to your OpenShift cluster or in some public registry, by means of tags (not to be confused with tags of docker images). You can even reference a tag within the same image stream. The power of image streams comes from the ability to trigger actions in your cluster when the reference behind a tag changes. Such a change can be caused either by uploading a new image to the internal image registry or by periodically checking the image tag of an external image registry. In both cases the corresponding image stream tag(s) are updated and the action(s) triggered.</p>
  <p>Kubernetes has nothing like that - not even as a third party solution that can be added separately.</p>
  <p><strong>Builds</strong><br />Builds are another core concept of OpenShift not available in Kubernetes, realized by means of jobs in the underlying Kubernetes. There are Docker builds, source-to-image builds (S2I), pipeline builds (Jenkins) and custom builds. Builds can be triggered automatically when the build configuration changes, or when a base image used in the build or the code base of a source-to-image build is updated. Typically the resulting artifact is another image that is uploaded to the internal image registry, triggering further actions (like deployment of the new image).</p>
  <p>Nothing comparable exists in Kubernetes. You may craft your own image and run it as a job to mimic any of the above-mentioned build types, but it will still lack the property of being triggered when some of the input gets updated, or of triggering further actions.</p>
  <p><strong>Jenkins inside</strong><br />Pipeline build is a special form of source-to-image build. It’s actually an image containing a Jenkins that monitors configured ImageStreamTags, ConfigMaps and the build configuration, and starts a Jenkins build in case of any updates. Resulting artifacts are uploaded to image streams, which may automatically trigger subsequent deployments of the artifacts.</p>
  <p>As already mentioned in the previous section there is nothing like that in Kubernetes. However, you may build and deploy your own custom Jenkins image that will drive your CI / CD process. The resulting artifacts will be docker images uploaded to some image repository. By means of the Jenkins Kubernetes CLI plugin these artifacts can then be deployed in the cluster. But it’s all hand crafted.</p>
  <p><strong>Deployment of applications</strong><br />The native means of deploying an application that consists of several components (pods, services, ingress, volumes, …) in Kubernetes is Helm. It is superior to OpenShift Templates: OpenShift Templates can be converted into Helm Charts, but not the other way round. Apparently Helm was not designed with a security or enterprise focus in mind, so by default running Helm requires privileged pods (for Tiller), making it possible for anybody to install an application anywhere in the cluster. Helm can be tweaked to be more secure and aware of Role-Based Access Control (RBAC). Helm cannot be deployed in an OpenShift cluster due to the mentioned security concerns.</p>
  <p>Actually you can deploy Helm on OpenShift, but you have to jump through several hoops, consider (i.e. ignore) the security implications, and still end up with a Helm installation that is somewhat limited compared to one on Kubernetes.</p>
  <p><strong>OpenShift comes with two mechanisms to deploy applications:</strong> Templates and Ansible Playbook Bundles.</p>
  <p>Templates predate Helm Charts in Kubernetes. The concept is pretty simple. One YAML file with descriptions of all required cluster resources. The descriptions can be parameterized by means of placeholders that are substituted by concrete values during deployment.</p>
  <p>Ansible Playbook Bundles (APB) are way more flexible. They are basically docker images with the Ansible runtime and a set of Ansible Playbooks to provision, deprovision, bind and unbind an application.</p>
  <p>Both Templates and APBs can be made available in the respective Service Broker (Template Service Broker and OpenShift Ansible Broker).</p>
  <p><strong>Service Catalog, Service Broker</strong><br />The service catalog is an optional component of Kubernetes that needs to be installed separately. After installation it needs to be wired together with existing service brokers by means of creating ClusterServiceBroker instances. Operators can then query the service catalog and provision / deprovision the offered services. The service catalog of Kubernetes is more targeted at managed services offered by cloud providers, less at means to provision services within the cluster. For more, see the <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>Kubernetes online course</strong></a>.</p>
  <p><strong>Exposing services</strong><br />Everything that makes a group of pods with equal functionality accessible through a well-defined gateway is called a service. In Kubernetes there is a confusing diversity of options concerning services: ClusterIP, NodePort, LoadBalancer, ExternalName, Ingress. Everything except ClusterIP is a means to make a service accessible to the outside (i.e. cluster-external) world. NodePort is not recommended for anything but ad-hoc access during development and troubleshooting. Ingress is a reverse proxy for HTTP(S) that forwards traffic to a certain service based on host name (virtual hosts) and/or path patterns. It also handles TLS termination and load balancing.</p>
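  <p>A hypothetical Ingress manifest illustrating host- and path-based routing with TLS termination (all names made up):</p>

```yaml
# Hypothetical Ingress sketch: route HTTP(S) traffic for one host
# to a backing Service, with TLS termination.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: example-tls   # assumed pre-existing TLS secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080
```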
  <p>LoadBalancer, ExternalName and Ingress are not available in Kubernetes out of the box but have to be provided by third-party solutions. Typically the cloud provider that runs the underlying infrastructure of the Kubernetes cluster also offers the components required for these options. So when you run your own cluster on bare metal or on virtual machines, you again have to install the required components by hand.</p>
  <p><strong>Basic authentication: </strong>Authentication headers in requests are verified against a clear-text(!) password file. This file also delivers group names.</p>
  <p><strong>OpenID tokens: </strong>This option requires a stand-alone identity provider that delivers ID tokens. Additionally it requires an OIDC plugin for kubectl. There is no ‘behind the scene’ communication between the Kubernetes API server and the identity provider. Instead the ID token is simply verified against the certificate of the identity provider.</p>
  <p><strong>Webhook tokens: </strong><br />Tokens are provided and verified by an authentication service that implements a simple REST-like API.</p>
  <p><strong>Authentication proxy: </strong>Sits between the client and the Kubernetes API server. It adds information about the authenticated user, groups and extra data as configurable request headers (e.g. X-Remote-User, X-Remote-Group). Transmission of credentials from the client to the proxy, and the actual authentication, are entirely up to the proxy.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/j9ZuhGeVI</guid><link>https://teletype.in/@snehacynix/j9ZuhGeVI?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/j9ZuhGeVI?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>How to use Terraform and Kubernetes to manage the IT worlds</title><pubDate>Mon, 29 Jun 2020 11:28:16 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/ca/8a/ca8abdc1-04b7-4287-8779-c2c825ec60d2.png"></img>Terraform and Kubernetes are the next generation of DevOps tools enabling the whole new layer of DevOps services. Terraform allows literally creating or destroying the whole IT worlds…]]></description><content:encoded><![CDATA[
  <p>Terraform and Kubernetes are the next generation of DevOps tools, enabling a whole new layer of DevOps services. Terraform allows you to literally create or destroy whole IT worlds…</p>
  <p>Terraform is a configuration orchestration tool released by Hashicorp, available both as an open-source DevOps solution and as an enterprise-grade DevOps-as-a-Service offer. Being a part of Hashicorp infrastructure-as-code stack, it significantly simplifies the provisioning, management, and disposal of immutable infrastructure with many cloud service providers, be it public, private or on-prem. <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes online course</a></strong> helps you to learn more effectively.</p>
  <p>The closest analog would be the AWS CloudFormation, an infrastructure automation service. However, while CloudFormation allows composing the needed AWS infrastructure with ease, Terraform does this for any underlying virtual components, be it an AWS region, Digital Ocean droplet or a vSphere virtual data center. While spawning such a virtual world, Terraform would configure the required resources and networking, register the DNS and create the required number of virtual machines.</p>
  <figure class="m_retina">
    <img src="https://teletype.in/files/ca/8a/ca8abdc1-04b7-4287-8779-c2c825ec60d2.png" width="611" />
    <figcaption>Terraform </figcaption>
  </figure>
  <p>Normally, this is when configuration management (CM) tools like Puppet or Ansible would come into play. They would handle the creation of the OSEs (Operating System Environments), complete with installing the needed software and patching it to the required versions. Ansible fans would now mention that Ansible playbooks can do so much more — including provisioning the infrastructure in the first place — and they would be right, no doubt.</p>
  <p>However, the point is that because Terraform manifests are declarative, they are cloud-agnostic. They work with equal ease on any cloud platform, orchestrating immutable infrastructure with an efficiency that far exceeds that of other DevOps tools.</p>
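  <p>A minimal sketch of such a manifest (hypothetical names and values, assuming the AWS provider) might look like this:</p>

```hcl
# Hypothetical Terraform sketch: declare a provider, a network and a VM.
# `terraform apply` would create these resources; `terraform destroy`
# would remove them again.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"
  tags = {
    Name = "example-web"
  }
}
```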
  <p><strong>Using Kubernetes for container management<br /></strong>When we discuss composing immutable infrastructure environments for software delivery pipelines, app containerization is one of the cornerstone topics. The whole point of the Infrastructure-as-Code approach to DevOps services is the ability to provision new environments and launch new apps in mere seconds, instead of enduring a long and laborious recovery after any malfunction. Containers are vital for that, as they are code envelopes with everything needed to run an app, from the OS to drivers and libraries. Once the Docker image is composed, containers based on it can be launched, stopped and multiplied as needed. See <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes certification training</a></strong> for more skills and techniques.</p>
  <p>The desired state of your infrastructure is defined in code, and it is the CM tool&#x27;s job to enforce that state through a CM agent working inside the OSE. Should the state be altered, the CM agent informs the tool, an alert is raised and a response is made.</p>
  <p>If this OSE works as a wrapper for a customer-facing application, the CM tools will install the VM agent, configure the OSE to run the application, configure the application itself, and so on. If the OSE is part of a greater container cluster, the container management tool will configure the kubelet and hand it off to Kubernetes to take care of afterward. The Kubernetes management tooling then keeps the containers linked, builds nodes out of them, builds clusters out of nodes, and handles the networking, proxies and container discovery.</p>
  <p><strong>Final thoughts on using Terraform and Kubernetes to manage the IT infrastructure</strong><br />When the required infrastructure state is described in Terraform manifests, the developers can treat it like any other code — use the versioning system to fork the required states of the infrastructure, restore them at any moment or adjust them in mere minutes. This allows to rebuild the faulty environments from a clean state with ease, should something go awry.</p>
  <p>Such an approach to software delivery shortens development time drastically. The companies that imbued their software development with the best DevOps practices experienced nearly 50 times more frequent code deployments, along with multiple other benefits described in our article on the state of DevOps adoption as of 2017.</p>
  <p>The coin has two sides, however. Using Terraform efficiently also requires using Hashicorp Consul and Vault, or opting for custom-tailored DevOps solutions. Mastering Kubernetes is also quite a hard task, best delegated to a trustworthy managed services provider. That said, fitting all the pieces of the composable infrastructure puzzle into place is quite a laborious task… yet only the sky is the limit once it is done.</p>
  <p>For example, your system can monitor pricing discounts at various cloud providers and move all systems to another provider in the blink of an eye once it suits you. As Terraform is cloud-agnostic, your business will get exactly the infrastructure it needs, and Kubernetes will allow juggling the containers to keep the end user&#x27;s experience uninterrupted while saving you time and money. For more techniques, go through <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes online training</a></strong><br /></p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/SEe04Xott</guid><link>https://teletype.in/@snehacynix/SEe04Xott?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/SEe04Xott?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>Explain SQL replication in DataStage</title><pubDate>Wed, 24 Jun 2020 10:53:14 GMT</pubDate><description><![CDATA[Replication is a collection of technologies that allow data and database objects to be copied. Then replicate it from one database to another, and then synchronized to maintain continuity between databases. Merge replication is primarily intended for mobile applications or distributed server applications where data conflicts are possible. In this article let us study the SQL replication in DataStage. You can use SQL replication in DataStage but before that you need to install DataStage.]]></description><content:encoded><![CDATA[
  <p>Replication is a collection of technologies that allow data and database objects to be copied from one database to another and then synchronized to maintain consistency between the databases. Merge replication is primarily intended for mobile applications or distributed server applications where data conflicts are possible. In this article, let us study SQL replication in DataStage. You can use SQL replication in DataStage, but before that you need to install DataStage.</p>
  <p><strong>Install DataStage</strong><br />The DataStage server supports the AIX, Linux, and Windows operating systems; you may pick one according to your requirements. You can then use the Asset Interchange tool to move data from an older version of InfoSphere to a newer version. The <strong><a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank">Data stage online course</a></strong> helps you to learn more skills and techniques.</p>
  <p><strong>Installation files </strong></p>
  <p>For InfoSphere DataStage to be installed and configured, you must have the following files in your setup.</p>
  <p>On Windows:</p>
  <ul>
    <li>EtlDeploymentPackage-windows-oracle.pkg</li>
    <li>EtlDeploymentPackage-windows-db2.pkg</li>
  </ul>
  <p>On Linux:</p>
  <ul>
    <li>EtlDeploymentPackage-linux-db2.pkg</li>
    <li>EtlDeploymentPackage-linux-oracle.pkg</li>
  </ul>
  <p>Now that you have installed DataStage, let us set up the SQL replication.</p>
  <p><strong>Setup SQL server Replication in DataStage</strong></p>
  <p><strong>Step 5</strong></p>
  <p>Use the following command to create the INVENTORY table and import data into it:</p>
  <p>db2 import from inventory.ixf of ixf create into inventory</p>
  <p><strong>Step 6 </strong></p>
  <p>Make a target table, and name STAGEDB as the target database. Since you have now created both the source and target databases, we will see how to replicate in the next step. The following information can be useful when setting up an ODBC data source.</p>
  <p><strong>Creation of objects with SQL Replication in DataStage</strong></p>
  <p>The image below shows how the data change flow from the source to the target database is delivered. You create source-to-target mappings of tables, known as subscription-set members, and group the members into a subscription.</p>
  <p>The replication unit (Change Data Capture) within InfoSphere CDC is referred to as a subscription. The changes made in the source are recorded in the &quot;Capture control table&quot;, sent to the CD table, and then to the target table, while the apply program holds the details of the rows in which changes need to be made; these details are also entered in the CD table as part of the subscription. A subscription contains mapping details that specify how data from a source data store is applied to a target data store. Note that CDC is now referred to as InfoSphere Data Replication. Upon execution of a transaction, InfoSphere CDC records the modifications on the source site, delivers the change data to the target, and stores information about the sync point in a bookmark table in the target database. Take <strong><a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank">datastage administrator training</a></strong> from industry experts.</p>
  <p>InfoSphere CDC uses the information from the bookmark to track InfoSphere DataStage work progress.</p>
  <p>The bookmark information is used as a restart point in the event of a malfunction. In our example, the table ASN.IBMSNAP FEEDETL stores syncpoint information related to DataStage, which is used to track DataStage progress.</p>
  <p>In this segment, you must do the following:</p>
  <ul>
    <li>Build CAPTURE control tables and APPLY control tables to store replication options</li>
    <li>Register the PRODUCT and INVENTORY tables as replication sources</li>
    <li>Create a subscription set with two members</li>
    <li>Build the subscription-set members and target CCD tables</li>
    <li>Use the ASNCLP command-line program to set up the SQL replication</li>
  </ul>
  <p><strong>Step 1</strong></p>
  <p>Locate the script file within the directory.</p>
  <p><strong>Step 2</strong> </p>
  <p>In the file, replace &lt; db2-connect-ID &gt; and &quot;&lt; password &gt;&quot; with your user ID and password for connecting to the SALES database.</p>
  <p><strong>Step 3</strong></p>
  <p>Change to the sqlrepl-datastage-tutorial/setupSQLRep directory and execute the script with the command below. The command connects to the SALES database and generates a SQL script that creates the Capture control tables.</p>
  <p>asnclp -f crtCtlTablesCaptureServer.asnclp</p>
  <p><strong>Step 4</strong></p>
  <p>Find the script file crtCtlTablesApplyCtlServer.asnclp in the same folder. Now replace the two instances of &lt; db2-connect-ID &gt; and &quot;&lt; password &gt;&quot; with the user ID and password for connecting to the STAGEDB database.</p>
  <p><strong>Step 5</strong> </p>
  <p>Now use the following command to construct control tables for application in the same command prompt.</p>
  <p>asnclp -f crtCtlTablesApplyCtlServer.asnclp</p>
  <p><strong>Step 6 </strong></p>
  <p>Locate the crtRegistration.asnclp script file and replace all &lt; db2-connect-ID &gt; instances with the user ID for connecting to the SALES database.</p>
  <p><strong>Step 7</strong></p>
  <p>Use the following script to register the source tables. As part of creating the registrations, the ASNCLP program will create two CD tables: CDPRODUCT and CDINVENTORY.</p>
  <p>asnclp -f crtRegistration.asnclp</p>
  <p>The CREATE REGISTRATION command uses the following options:</p>
  <ul>
    <li>Differential refresh: tells the Apply program to update the target table only when rows change in the source table</li>
    <li>Image both: records the value in the source column both before and after the change occurred</li>
  </ul>
  <p><strong>Step 8 </strong></p>
  <p>Use the following steps to connect to the target database (STAGEDB):</p>
  <ul>
    <li>Find the file crtTableSpaceApply.bat and open it in a text editor</li>
    <li>Replace &lt; stagedb-connect-ID &gt; and &lt; stagedb-password &gt; with your user ID and password</li>
    <li>Enter crtTableSpaceApply.bat in the DB2 command window and run the file</li>
  </ul>
  <p>This batch file creates a new tablespace in the target database (STAGEDB).</p>
  <p><strong>Step 9</strong></p>
  <p>You need to Locate the script files crtSubscriptionSetAndAddMembers.asnclp. Then make the following modifications.</p>
  <p>Replace all &lt; sales-connect-ID &gt; and &lt; sales-password &gt; instances with a user ID and password to connect to the SALES database (source).</p>
  <p>Replace all &lt; stagedb-connect-ID &gt; and &lt; stagedb-password &gt; instances with the user ID and password for connecting to the STAGEDB database (target). After these changes, running the script builds a subscription set (ST00) that brings the source and target tables together. The script also generates two subscription-set members and the CCD (consistent change data) tables in the target database, which will store the changed data that InfoSphere DataStage will consume.</p>
  <p><strong>Step 10</strong></p>
  <p>Run the script to create the subscription set, the subscription-set members and the CCD tables:</p>
  <p>asnclp -f crtSubscriptionSetAndAddMembers.asnclp</p>
  <p>The subscription set is created with several options for its two members, including the condensed and complete settings for the CCD tables and an import/export load type.</p>
  <p><strong>Step 11</strong></p>
  <p>Because of a limitation in the replication administration tools, you will run another batch file to set the TARGET_CAPTURE_SCHEMA column of the IBMSNAP_SUBS_SET control table to null:</p>
  <ul>
    <li>Locate updateTgtCapSchema.bat and open it in a text editor</li>
    <li>Replace &lt;stagedb-connect-ID&gt; and &lt;stagedb-password&gt; with the user ID and password for connecting to the STAGEDB database</li>
    <li>Enter updateTgtCapSchema.bat in the DB2 command window and execute the file</li>
  </ul>
  <p><strong>Conclusion </strong></p>
  <p>I hope you reach a conclusion about setting up SQL in DataStage. You can learn more through<strong> <a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank">DataStage Online Training</a>.</strong></p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/zR702zxFa</guid><link>https://teletype.in/@snehacynix/zR702zxFa?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/zR702zxFa?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>How To Resolve Recovery Pending In SQL Server 2020</title><pubDate>Tue, 23 Jun 2020 09:24:57 GMT</pubDate><description><![CDATA[If you are not aware of what is going on with your database then it is better to solve this issue immediately. Here, you will find all possible ways to solve SQL Database restoring issue. sql server dba training helps you to learn more skills and techniques.]]></description><content:encoded><![CDATA[
  <h4>When using a SQL database, many users face a &quot;Pending state error&quot; that stops the database from working.</h4>
  <p>If you are not aware of what is going on with your database then it is better to solve this issue immediately. Here, you will find all possible ways to solve SQL Database restoring issue. <a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank"><strong>sql server dba training</strong></a> helps you to learn more skills and techniques.</p>
  <p>SQL Database is one of the most advanced and organized databases, maintaining data integrity in a protected mode. But as a SQL Server client, you need to be mindful that it can run into a few errors, and one of the most common and best-documented is the Recovery Pending state in SQL Server. First of all, we should know the reasons behind the &quot;SQL Database Pending Recovery Error&quot;. Let us begin!</p>
  <p><strong>Reasons why the Recovery Pending state occurs in SQL Server:</strong></p>
  <ul>
    <li>Log file corruption.</li>
    <li>Memory space is full.</li>
    <li>The database was not shut down properly.</li>
    <li>Lack of space in the database partition.</li>
    <li>Some tasks remained open while shutting down.</li>
    <li>Corruption of MDF files.</li>
  </ul>
  <p>Now, it is time to know about the database states. There are three SQL Server database states, and a database becomes damaged if one or more of its MDF/NDF files get corrupted.</p>
  <p><strong>Three SQL Server database states:</strong></p>
  <p>Online: The database remains available and online even if a single file is damaged and cannot be accessed.</p>
  <p>Suspect: If the transaction log file is damaged and prevents recovery or a transaction rollback from completing, the SQL database is marked suspect and fails.</p>
  <p>Recovery Pending: The SQL Server must run recovery on the database, but something is preventing it from starting.</p>
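  <p>You can check which state each database is currently in with a simple catalog query:</p>

```sql
-- List every database on the instance together with its state;
-- a database stuck in recovery is reported as RECOVERY_PENDING.
SELECT name, state_desc
FROM sys.databases;
```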
  <p>Now after knowing all the states of the database, move forward to know the ways for solving the SQL Database Restoring issue manually.</p>
  <p>Alert!!! Make sure that proper backups are created before performing any manual solution on the SQL Server database, so that the data is still available in case of any mistake. The manual methods below should only be performed by users with profound technical knowledge of the topic. If the manual approaches are confusing, a more direct approach is also given below. Take <a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank"><strong>sql database administrator training</strong></a> from industry experts.</p>
  <p><strong>Manual tactics to solve the &quot;Pending state error&quot;</strong></p>
  <p>Method 1: Set the SQL database to emergency mode. Run the queries below:</p>
  <p>ALTER DATABASE [DBName] SET EMERGENCY;</p>
  <p>GO</p>
  <p>ALTER DATABASE [DBName] SET SINGLE_USER;</p>
  <p>GO</p>
  <p>DBCC CHECKDB ([DBName], REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS;</p>
  <p>GO</p>
  <p>ALTER DATABASE [DBName] SET MULTI_USER;</p>
  <p>GO</p>
  <p>Emergency mode marks the database as READ_ONLY, disables logging, and makes it accessible only to system administrators; only the DBA is allowed to connect at this time.</p>
  <p>Most such problems can be solved by entering emergency mode and starting the server recovery, after which the server comes out of EMERGENCY mode automatically.</p>
  <p>Method 2: Set the database to the emergency state, detach it, and then re-attach it. Run the queries below:</p>
  <p>ALTER DATABASE [DBName] SET EMERGENCY;</p>
  <p>ALTER DATABASE [DBName] SET MULTI_USER;</p>
  <p>EXEC sp_detach_db '[DBName]';</p>
  <p>EXEC sp_attach_single_file_db @dbname = '[DBName]', @physname = N'[mdf path]';</p>
  <p>The steps above delete the corrupted log automatically and build a new one.</p>
  <p>If you have successfully implemented the approaches above, the issue may be resolved. If not, moving to a safer and better solution is recommended: an automated process.</p>
  <p><strong>Safe and Quick Process-</strong></p>
  <p>It is advised to turn to a secure, reliable, and automated third-party solution for a better solution. Use SQL Database Recovery utility to solve SQL Database Pending Recovery Error as it recovers the corrupted MDF and NDF files quickly.</p>
  <p><strong>Features:</strong></p>
  <ul>
    <li>Performs MDF file recovery in two modes, i.e. advanced and standard mode.</li>
    <li>Recovers all file objects, including tables, views, stored procedures, programmability, triggers, defaults, and functions.</li>
    <li>Saves recovered data either in SQL Server-compatible script format or in SQL Server database format.</li>
    <li>It is compatible with Windows 10, 8, 7, Vista, 2003, XP and 2000.</li>
    <li>It supports SQL Server versions 2000, 2005, 2008, 2012, 2014 and 2016.</li>
  </ul>
  <p><strong>Final Thoughts</strong></p>
  <p>In this blog, we addressed the most awaited question “How to solve SQL Database Pending Recovery Error”. Also, we have given reasons behind this error with manual approaches as well. If users don’t have technical knowledge then they may opt for a direct and safer approach as explained above in this blog. <a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank"><strong>SQL server dba online course</strong></a> helps you to learn more effectively.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/HwNVFxFeH</guid><link>https://teletype.in/@snehacynix/HwNVFxFeH?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/HwNVFxFeH?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>How to Prepare for Certified Kubernetes Application Developer (CKAD) Exam?</title><pubDate>Mon, 22 Jun 2020 14:04:47 GMT</pubDate><description><![CDATA[Kubernetes has emerged as a promising container orchestration platform preferred by many enterprises for automating the management of containerized applications. Most important of all, enterprises adopting DevOps approaches are more likely to use Kubernetes. According to the 2018 cloud predictions by Forrester Research, Kubernetes has clearly emerged as a winner among different tools for container orchestration.]]></description><content:encoded><![CDATA[
  <p>Kubernetes has emerged as a promising container orchestration platform preferred by many enterprises for automating the management of containerized applications. Most important of all, enterprises adopting DevOps approaches are more likely to use Kubernetes. According to the 2018 cloud predictions by Forrester Research, Kubernetes has clearly emerged as a winner among different tools for container orchestration.</p>
  <h3>Target Audience for the CKAD Exam</h3>
  <p>The objective of certified Kubernetes application developer training should comply with the basic objective of the exam. The CKAD exam tests the abilities of users in designing, developing, configuring, and exposing cloud-native applications for Kubernetes. The ideal target audience for this exam involves a candidate aspiring to prove skills in the definition of application resources. <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>kubernetes online training</strong></a> helps you to learn more skills and techniques.</p>
  <p>The candidates should also have an interest in showcasing skills for using core primitives in building, monitoring, and troubleshooting scalable applications and tools on Kubernetes. Basically, the CKAD exam primarily aims at verifying that certified candidates have the required knowledge, experience, and skills for performing the responsibilities of Kubernetes application developers.</p>
  <h3>Requirements for the CKAD Exam</h3>
  <p>Candidates should know the important requirements for the exam before starting their CKAD exam preparation schedule. The prerequisites for the exam establish the capabilities of the candidate to take on the exam. You can think of the recommended requirements for the exam as vital support for your readiness for the CKAD exam. The candidates for the CKAD exam should have fluency in the following.</p>
  <ul>
    <li>Cloud-native application concepts and architectures</li>
    <li>OCI-Compliant Container Runtime such as rkt or Docker</li>
    <li>At least one programming language such as Java, Python, Go or Node.js</li>
  </ul>
  <p>Most important of all, the certified Kubernetes application developer exam preparation of candidates should focus on preparing in a hands-on, command-line environment. The exam also implies the requirements for having knowledge regarding microservices architecture and container runtimes.</p>
  <h3>Basic Exam Details</h3>
  <p>The next crucial information for candidates to ensure the best possible training for the CKAD certification exam refers to exam details. Candidates will have to appear for the Certified Kubernetes Application Developer certification exam in an online, proctored environment. The exam would include various performance-based tasks, and candidates will have to solve the problems in the tasks in a command line.</p>
  <p>The total duration of the exam is 2 hours. Candidates will have to pay a registration fee of $300 to take the exam, and it also includes the option of one free retake. Awareness of such basic information can help candidates with certified Kubernetes application developer exam preparation effectively.</p>
  <h3>Domains Covered in the CKAD Exam</h3>
  <p>The next important concern that candidates should take into account in a certified Kubernetes application developer preparation guide refers to the exam objectives. Candidates should know about the topics they have to study for an exam to prepare effectively for it. Actually, candidates can find the exam objectives in the outline of exam domains that reflect on the abilities tested in an exam. Here are the important domains covered in the Certified Kubernetes Application Developer certification exam.</p>
  <ul>
    <li>Core Concepts</li>
    <li>Configuration</li>
    <li>Multi-container Pods</li>
    <li>Observability</li>
    <li>Pod Design</li>
    <li>Services and Networking</li>
    <li>State Persistence</li>
  </ul>
  <p>In addition to the exam domains, candidates also need more information to ensure the best results from certified Kubernetes application developer exam preparation. Candidates should know about the subtopics and the weighting of each domain. Why? The knowledge of subtopics and weighting can help candidates build an organized preparation schedule and distribute their preparation efforts across the different domains. Let us find out what’s in store in each domain of the CKAD certification exam.</p>
  <h4>Domain 1: Core Concepts</h4>
  <p>The first domain of the CKAD exam accounts for 13% of the total questions in the exam. The subtopics covered in this domain include the following,</p>
  <ul>
    <li>Understanding Kubernetes API primitives</li>
    <li>Creation and configuration of basic Pods</li>
  </ul>
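  <p>As a taste of this domain, creating and configuring a basic Pod comes down to a short manifest like the following (a hypothetical example; names and image are made up):</p>

```yaml
# Hypothetical minimal Pod manifest: one container with resource limits.
# Apply with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx:1.19
      ports:
        - containerPort: 80
      resources:
        limits:
          memory: "128Mi"
          cpu: "250m"
```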
  <h4>Domain 2: Configuration</h4>
  <p>The second domain in the CKAD certification exam deals with configuration. This domain accounts for almost 18% of the questions in the exam. The subtopics covered in this domain are as follows,</p>
  <ul>
    <li>Understanding ConfigMaps</li>
    <li>Understanding SecurityContexts</li>
    <li>Definition of an application’s resource requirements</li>
    <li>Creation and consumption of Secrets</li>
    <li>Understanding ServiceAccounts</li>
  </ul>
  <h4>Domain 3: Multi-Container Pods</h4>
  <p>This domain is significant for certified Kubernetes application developer exam preparation as it accounts for 10% of questions in the exam. The subtopics covered in this domain are as follows,</p>
  <ul>
    <li>Understanding Multi-Container Pod design patterns such as sidecar, ambassador or adapter</li>
  </ul>
  <h4>Domain 4: Observability</h4>
  <p>The topic of observability is also a prominent domain in the CKAD exam. Candidates can see it clearly that the domain accounts for almost 18% of the total questions in the exam. The subtopics covered in this domain are as follows,</p>
  <ul>
    <li>Understanding LivenessProbes and ReadinessProbes</li>
    <li>Understanding container logging</li>
    <li>Understanding the approaches for monitoring of applications in Kubernetes</li>
    <li>Understanding debugging in Kubernetes</li>
  </ul>
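  <p>For instance, the probes subtopic revolves around container settings along these lines (hypothetical paths and port):</p>

```yaml
# Hypothetical probe configuration inside a Pod's container spec:
# the kubelet restarts the container if the liveness probe fails,
# and withholds traffic until the readiness probe succeeds.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 2
  periodSeconds: 5
```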
  <h4>Domain 5: Pod Design</h4>
  <p>This is undoubtedly one of the crucial domains in any certified Kubernetes application developer study guide. The domain accounts for almost 20% of the total questions in the CKAD certification exam. The subtopics covered in this domain are as follows,</p>
  <ul>
    <li>Understanding deployments and the methods for performing rolling updates</li>
    <li>Understanding the methods to perform rollbacks</li>
    <li>Understanding Jobs and CronJobs</li>
    <li>Understanding the methods to use Labels, Annotations, and Selectors</li>
  </ul>
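  <p>The Jobs and CronJobs subtopic, for example, covers manifests along these lines (hypothetical schedule and image):</p>

```yaml
# Hypothetical CronJob: run a short-lived container every five minutes.
apiVersion: batch/v1beta1   # batch/v1 in newer clusters
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: busybox:1.32
              command: ["sh", "-c", "date"]
```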
  <h4>Domain 6: Services and Networking</h4>
  <p>The domain of services and networking in the CKAD certification exam is highly crucial and accounts for 13% of total questions in the exam. The subtopics covered in this domain are as follows,</p>
  <ul>
    <li>Understanding Services</li>
    <li>Demonstration of basic understanding regarding NetworkPolicies</li>
  </ul>
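<p>For example, a basic Service that routes traffic to Pods by label might be sketched as follows (names and ports are illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # routes to pods carrying this label
  ports:
    - port: 80          # port the service exposes
      targetPort: 8080  # port the container listens on
```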
  <h4>Domain 7: State Persistence</h4>
  <p>The final domain of the CKAD certification exam deals with State Persistence. This domain accounts for 8% of the total questions in the exam. The subtopics covered in this domain include,</p>
  <ul>
    <li>Understanding PersistentVolumeClaims for storage</li>
  </ul>
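<p>A minimal PersistentVolumeClaim might be sketched as follows (the name, access mode, and size are illustrative):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce      # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

<p>A Pod then references the claim through <code>spec.volumes[].persistentVolumeClaim.claimName</code>.</p>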
  <p>After reviewing all the domains and their subtopics, candidates can approach their exam preparations with confidence. As you all know, knowledge and certainty about the path ahead is the foremost determinant of an individual’s confidence to take the first step. So, now it is time for candidates to get ready and learn about the best practices to prepare for the CKAD certification exam.</p>
  <h2>Preparation Guide for Certified Kubernetes Application Developer Certification Exam</h2>
  <p>Now, many readers might be expecting some sort of magical guide for certified Kubernetes application developer exam preparation. However, candidates should know that the trick to successful preparation is accountability and dedication. If you follow the best practices recommended by subject matter experts and the tips of qualified CKAD professionals, you have a better chance of passing the exam on the first attempt.</p>
  <ul>
    <li>Visit the Official Certification Page</li>
  </ul>
  <p>One of the first things that every candidate should do is to visit the official CNCF website. The CNCF website is the ideal source for candidates to verify all information about the CKAD exam. No matter how authentic different sources of information about the CKAD exam may be, the official certification page is always the first source of information.</p>
  <p>Most important of all, the official certification page provides not only information about the CKAD exam but also helpful support. Candidates can find an overview of the curriculum and a candidate handbook on the official certification page. Furthermore, candidates can also find the answers to some frequently asked questions about the CKAD exam before starting their exam preparations.</p>
  <ul>
    <li>Start With the Kubernetes Basics</li>
  </ul>
  <p>The next important pointer for candidates preparing for the CKAD certification exam is to learn the basics of Kubernetes. One reliable and highly recommended source for learning the basics is the “Kubernetes for Developers” course from The Linux Foundation.</p>
  <p>The official training course is ideal for building the foundation to tackle performance-based certified Kubernetes application developer questions. The basics of Kubernetes primarily deal with the architecture of Kubernetes and its functionalities such as building, designing, security, and deployment configuration. <strong>Kubernetes certification training</strong> helps you to learn more effectively.</p>
  <ul>
    <li>Create a Kubernetes Cluster</li>
  </ul>
  <p>After completing their training in the basics of Kubernetes, candidates should focus on creating a Kubernetes cluster. Even though it is not mandatory for certified Kubernetes application developer exam preparation, it helps in gaining hands-on knowledge of how Kubernetes works.</p>
  <p>Candidates should try to create a cluster on their own several times to gain a comprehensive picture of Kubernetes functionality. In addition, candidates can also understand the ways in which different components of Kubernetes work with each other.</p>
  <ul>
    <li>Practice Till You Achieve Perfection</li>
  </ul>
  <p>The importance of practice for developing fluency with certified Kubernetes application developer questions cannot be overstated. After learning the basic concepts of Kubernetes and creating a Kubernetes cluster, candidates should work through CKAD sample exercises.</p>
  <p>GitHub is an excellent source of sample exercises for the Kubernetes Application Developer certification exam. Candidates should use sample exercises to develop familiarity with the kubectl command line. Time during the exam is too limited to keep consulting the documentation for kubectl usage, so if you are already familiar with the command line, you have a better chance of passing the exam.</p>
  <p>If candidates forget anything related to the command line, they can get help through commands such as kubectl -h and kubectl run -h for finding resources. Another important concern for candidates during preparation for the CKAD exam is practicing the kubectl command. Candidates can use kubectl to create resources such as secret, deployment, configmap, cronjobs, service, and others.</p>
  <p>As a result, candidates don’t have to write manifest files from scratch to create resources during the exam, thereby saving a considerable amount of time. For cases where candidates have to edit the manifest, they can use the ‘--dry-run’ and ‘-o yaml’ options to save the YAML file and then edit the manifest files.</p>
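<p>For example, the workflow described above might look like this (the deployment name and image are placeholders; recent kubectl versions spell the flag as --dry-run=client):</p>

```shell
# Generate a manifest without creating anything on the cluster
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml

# Edit web.yaml as needed, then create the resource from it
kubectl apply -f web.yaml
```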
  <h3>Salary of Certified Kubernetes Application Developer</h3>
  <p>The most important factor that can drive a candidate’s motivation to pursue preparation for the CKAD certification exam with dedication is the certified Kubernetes application developer salary. The average estimate for a Kubernetes-related job is $144,648. Therefore, the expected certified Kubernetes application developer salary would range between $78,000 and $215,000.</p>
  <h3>Ready to Prepare For Certified Kubernetes Application Developer Exam?</h3>
  <p>With all the information presented here, you can start your certified Kubernetes application developer exam preparation right now! However, candidates should also note some important pointers to follow during the exam. Candidates should ensure a stable internet connection to avoid disruptions in their exam due to lagging. Consider a <strong><a href="https://onlineitguru.com/kubernetes-training.html" target="_blank">Kubernetes online course</a></strong> from industry experts.</p>
  <p>You should also refer to the certified Kubernetes administrator preparation guide for official resources to master the kubectl command line, and ensure that you can configure kubectl auto-completion. Candidates should also know how to use the notepad in the browser terminal during the exam. The notepad can help you keep track of the questions you skip during the exam alongside their percentage weights.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/DiVHXkNsR</guid><link>https://teletype.in/@snehacynix/DiVHXkNsR?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/DiVHXkNsR?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>How to Establish an Amazon Redshift Connection in DataStage</title><pubDate>Thu, 11 Jun 2020 12:40:00 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/8a/71/8a710393-d283-47c6-bfc6-104b9975afad.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/72/0b/720b6226-6465-416a-878c-06ad36f68496.png"></img>Amazon Redshift is a data warehouse, which allows us to connect through standard SQL based clients and business intelligence tools effectively. It delivers fast query performance by using row-wise data storage by executing the queries parallel in a cluster on multiple nodes.]]></description><content:encoded><![CDATA[
  <p>Amazon Redshift is a data warehouse that allows us to connect through standard SQL-based clients and business intelligence tools effectively. It delivers fast query performance by using columnar data storage and by executing queries in parallel across multiple nodes in a cluster.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/72/0b/720b6226-6465-416a-878c-06ad36f68496.png" width="602" />
  </figure>
  <p><strong>Pre-requisites:</strong></p>
  <ul>
    <li>IBM InfoSphere DataStage and QualityStage Designer v9.1.0</li>
    <li>Create an account in AWS and configure Redshift DB, refer to this link to configure</li>
    <li>Download AWS Redshift DB Driver</li>
  </ul>
  <p>If you want to Gain In-depth Knowledge on DataStage, please go through this link <a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank"><strong>DataStage Training</strong></a></p>
  <p><strong>Step-by-Step process:</strong></p>
  <p><strong>Step 1:</strong> To connect AWS Redshift Database in Datastage, use the JDBC Connector which is available under the Database section in the palette.</p>
  <ol>
    <li>Create a new file named <strong>isjdbc.config</strong> under the <strong>$DSHOME</strong> (/opt/IBM/InformationServer/Server/DSEngine) path.</li>
  </ol>
  <p><strong><em>Note:</em></strong> If we have already connected to any database using the JDBC connector, the file will exist already. In that case, we need to edit the existing file.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/e7/dd/e7dd41f9-7877-43a1-a607-5a5e5453ce7a.png" width="623" />
  </figure>
  <ol>
    <li>Place the downloaded AWS Redshift DB driver (RedshiftJDBC4-1.2.1.1001.jar) file in any path and set that file’s path in the CLASSPATH parameter. In the CLASS_NAMES parameter, we need to specify the driver class name, which is available in the jar file.</li>
  </ol>
  <p><strong><em>Note:</em></strong> If we want to add more jar files or more class names, the jar file paths or class names should be separated by a semicolon (;)</p>
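<p>Putting the two parameters together, an isjdbc.config might look like the following sketch (the jar path, version, and driver class name are illustrative; match them to the jar you actually downloaded):</p>

```shell
# Sketch of an isjdbc.config for the Redshift driver.
# The jar location, version, and class name below are examples.
cat > isjdbc.config <<'EOF'
CLASSPATH=/opt/IBM/InformationServer/Server/DSEngine/RedshiftJDBC4-1.2.1.1001.jar
CLASS_NAMES=com.amazon.redshift.jdbc4.Driver
EOF

# More jars or class names would be appended with semicolons, e.g.
# CLASSPATH=/path/one.jar;/path/two.jar
cat isjdbc.config
```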
  <figure class="m_original">
    <img src="https://teletype.in/files/b3/11/b311ca48-051e-40e7-afc3-7979e57d153f.png" width="624" />
  </figure>
  <ol>
    <li>The screenshot below shows that the downloaded jar file is placed in the path mentioned in <strong>isjdbc.config</strong></li>
  </ol>
  <figure class="m_original">
    <img src="https://teletype.in/files/c0/08/c00804d3-23d7-4f9f-8b04-52e39b40df8d.png" width="624" />
  </figure>
  <p><strong>Step 2:</strong> Develop a DataStage job using a JDBC connector (available under the Database section in the palette) as the source or target. <a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank"><strong>DataStage administrator training</strong></a> helps you to learn more techniques and skills effectively.</p>
  <ol>
    <li>Create a new parallel job with the JDBC Connector as source.</li>
    <li>Open the JDBC connector, enter the JDBC URL in the URL section, and fill in the user name/password &amp; table name like below,</li>
  </ol>
  <figure class="m_original">
    <img src="https://teletype.in/files/24/31/2431345d-3331-4e35-a19d-d6d734e214f6.png" width="536" />
  </figure>
  <p>3. The JDBC URL is available in the Cluster Database Properties in the AWS console. The JDBC URL has the following format:</p>
  <ul>
    <li>jdbc:redshift://&lt;ServerName&gt;:&lt;Port&gt;/&lt;Database Name&gt;</li>
    <li>Server Name: differs for every individual cluster</li>
    <li>Port number: 5439 (the default for Redshift)</li>
    <li>Database Name: redshiftdemo (created during the configuration)</li>
  </ul>
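<p>Filled in, the URL might look like this (the server name below is a made-up placeholder; yours comes from the AWS console):</p>

```
jdbc:redshift://examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/redshiftdemo
```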
  <figure class="m_original">
    <img src="https://teletype.in/files/30/f6/30f6e6a2-2b28-4938-a13b-97bdedab3f55.png" width="276" />
  </figure>
  <ol>
    <li>Once the parameters are filled in the JDBC Connector, test the connection like below</li>
  </ol>
  <p><strong><em>Note:</em></strong> Check whether the AWS Redshift cluster is running before testing the connection</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/f4/88/f48891be-da06-4a19-8bb3-7c9cdfb73b0e.png" width="510" />
  </figure>
  <ol>
    <li>Once the connection is established successfully, we can view the data in DataStage</li>
  </ol>
  <figure class="m_original">
    <img src="https://teletype.in/files/93/dc/93dc367c-aa84-471d-b0fa-4933e203e30b.png" width="510" />
  </figure>
  <p><strong>Step 3:</strong> Now that the connection is established successfully, we can develop our job with the stages needed. Below, I’ve created a simple mapping with a Copy stage and a Sequential File stage as the target. The job completed successfully: 8 records were exported from the source and the same number of records were loaded into the sequential file</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/3f/90/3f90deb0-cbd8-45e0-9de3-04b17522beb1.png" width="465" />
  </figure>
  <p><strong>Step 4:</strong> Check the target to see whether the records are loaded successfully or not, like below,</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/63/bf/63bfe73b-ca6f-4dd4-a93b-ef3f04e5bead.png" width="472" />
  </figure>
  <p><strong>Troubleshooting:</strong></p>
  <ul>
    <li><em>java.lang.UnsupportedClassVersionError: bad major version at offset=6 –</em></li>
  </ul>
  <figure class="m_original">
    <img src="https://teletype.in/files/ce/7e/ce7e0056-8c6f-4172-9cb2-3b78c42ac219.png" width="451" />
  </figure>
  <p><strong>Cause</strong><em>:</em> The Java version used to compile the jar file is different from the JRE available on your Unix machine</p>
  <p><strong>Solution</strong>: Use a jar file built for a lower Java version, or compile with the same Java runtime environment</p>
  <ul>
    <li><em>The driver configuration file isjdbc.config could not be found –</em></li>
  </ul>
  <figure class="m_original">
    <img src="https://teletype.in/files/79/be/79bee57e-594d-448e-ba2a-9b52e4de32fd.png" width="412" />
  </figure>
  <p><strong>Cause</strong>: This happens if we don’t place the isjdbc.config file in the $DSHOME path</p>
  <p><strong>Solution</strong>: We have to place the isjdbc.config file in the $DSHOME path (/opt/IBM/InformationServer/Server/DSEngine). Learn more with <a href="https://onlineitguru.com/datastage-online-training-placement.html" target="_blank"><strong>DataStage administrator training</strong></a> from industry experts.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/FL_00eko2</guid><link>https://teletype.in/@snehacynix/FL_00eko2?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/FL_00eko2?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>Kubernetes vs. Mesos – an Architect’s Perspective</title><pubDate>Wed, 10 Jun 2020 07:23:46 GMT</pubDate><description><![CDATA[Linux containers are now in common use. But when they were first introduced in 2008, virtual machines, or VMs, were the state-of-the-art option for cloud providers and internal data centers looking to optimize a data center’s physical resources. ]]></description><content:encoded><![CDATA[
  <p>Linux containers are now in common use. But when they were first introduced in 2008, virtual machines, or VMs, were the state-of-the-art option for cloud providers and internal data centers looking to optimize a data center’s physical resources. </p>
  <p>This arrangement worked well enough, except for one major flaw: Each VM required both a complete operating system and emulated instructions to reach the physical CPU. While some technologies like <em>Intel VT-x</em> and <em>AMD-V</em> promised to resolve this issue, they were not as efficient as bare metal. Learn more with a <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>Kubernetes online course</strong></a> from industry experts.</p>
  <p>Containers, on the other hand, take a different approach to maximizing resource usage: They share a common kernel across all applications, while each application is free to claim the operational resources it needs. Every container gets isolated access to its resources (like CPU, memory, disk, and network), and each can be prioritized by a manager. In other words, containers can run on bare metal, sharing resources, but are unable to access other containers’ resources. While in some ways this resembles the sandboxing implementation in modern mobile operating systems, it runs at a low level and does not require changes to an application already running in a Linux distribution.</p>
  <p>Containers have one major flaw: they are great for green-field projects, but how can they be implemented for older tools that an entire company may be developing or using? The fact is that some technologies and products are simply too difficult to deploy as containers.</p>
  <p>Kubernetes (K8S) is an open-source container orchestration system originally created by Google that handles the entire production lifecycle, from on-the-fly deployment, to scaling up and down, to health checks with high availability. It’s also very opinionated.</p>
  <h2>The Problem and the Different Approaches</h2>
  <p>We should emphasize that we’re comparing tools that have different approaches to the same problem. K8S is a container orchestrator or, in other words, a tool that manages containers and their peculiarities such as availability, scaling, and so on. Apache Mesos, on the other hand, is more like a “cloud operational system” that tries to manage all the resources of a cloud (public or private), meaning it has a far broader range of responsibilities.</p>
  <p>While Kubernetes works on the concept that every computational resource must be enveloped within a container, Mesos understands that the world is not black and white, and that we should use the best tools for each particular situation.</p>
  <p><strong>Kubernetes architecture</strong></p>
  <p>Kubernetes has one or more <em>kubernetes master</em> instances and one or more <em>kubernetes nodes</em>. The master instance is used to manage the cluster and the available nodes. It also manages deployment settings (number of instances, what to do with a version upgrade, high availability, etc.) and service discovery. Every computing resource is enveloped by a container and cloud resources, such as network, storage, and everything else, should be provided by plugins to comply with the K8S philosophy.</p>
  <p><strong>Mesos architecture</strong></p>
  <p>Mesos’ architecture is similar, but it has evolved a whole new layer. Like K8S, Mesos has a master and nodes (<em>agents</em>), which provide analogous functionality. However, it adds a <em>scheduler</em> layer that doesn’t exist in K8S. A scheduler is an implementation of a technology that can use the Mesos infrastructure to run what it was built for. For example, one deployment might run Hadoop (big data) and MPI (messaging) schedulers; there are dozens of schedulers available, from containers (Marathon) to continuous integration (Jenkins). Of the long list of schedulers built on Mesos, one worth highlighting is Marathon, which is a container orchestration scheduler, similar to K8S.</p>
  <p>As a generalization, Kubernetes is a more opinionated tool that can be very useful—if you embrace its founders’ vision. Mesos, on the other hand, is more flexible, and even enables you to create your own scheduler. Containers are generally not a good fit for legacy or monolithic systems, but these can be accommodated by Mesos through the creation of suitable schedulers. Unfortunately, however, Mesos adds a new layer of complexity that many developers are not willing to tolerate.</p>
  <p>In this case, our deployment, named <em>wordpress-deployment</em>, will have three instances and will run the image <em>wordpress/wordpress:4.5-apache</em>. <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>online kubernetes course</strong></a> for more skills and techniques.</p>
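<p>The manifest being referred to is not included in this excerpt; a sketch of what such a deployment could look like, with the fields inferred from the surrounding text, is:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 3                      # "three instances"
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress          # container name used by kubectl set image below
          image: wordpress:4.5-apache
```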
  <pre>&gt; kubectl set image deployment/wordpress-deployment wordpress=wordpress:4.8-apache</pre>
  <p>This will force the deployment to stop the old images and start the new ones; however, the old ones will be stopped and removed only when the new images are available. If you need to roll back a version, you simply type the following command and the previous state will be restored:</p>
  <pre>&gt; kubectl rollout undo deployment/wordpress-deployment</pre>
  <p>All deployment updates are stored, so if needed, you can roll back through all revisions.</p>
  <p>Mesos (through Marathon) also has a declarative approach to updates. To paraphrase the documentation, each application has a unique id, and with every new execution, Marathon makes on-the-fly changes to the application’s properties. Some properties can be used for fine-grained deployment control. In particular, Marathon provides the property <em>minimumHealthCapacity</em> to aid deployment. This represents the proportion of old instances to new ones. For example, if <em>minimumHealthCapacity</em> = 0, all old instances can be killed before a new version is deployed. On the other hand, if <em>minimumHealthCapacity</em> = 1, all instances of the new version will run side by side with the old version before the latter is removed.</p>
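<p>In Marathon, this property sits in the application’s JSON definition under <em>upgradeStrategy</em>; a hedged sketch (the id, instance count, and image are illustrative):</p>

```json
{
  "id": "/wordpress",
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "wordpress:4.8-apache" }
  },
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5
  }
}
```

<p>With a minimumHealthCapacity of 0.5, Marathon keeps at least half the instances healthy while replacing the rest.</p>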
  <h2>Summary</h2>
  <p>Kubernetes and Mesos employ different tactics to handle the same problem. Mesos is more ambitious, as Kubernetes equates to just a single node of Mesos’ entire solution. However, the complexity of Mesos presents a higher barrier to entry for a new user. This is reflected in its limited adoption by the major cloud providers as an on-premises solution when compared with Kubernetes’ rapid uptake.</p>
  <p>From an architect’s perspective, both solutions are equivalent in terms of features, with each having its own particular strengths. The nice architectural design of Mesos provides some good options for handling legacy systems and more specific technologies like distributed processing with Hadoop. <a href="https://onlineitguru.com/kubernetes-training.html" target="_blank"><strong>Kubernetes online training</strong></a> helps you to learn more effectively.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@snehacynix/iFDFENRaG</guid><link>https://teletype.in/@snehacynix/iFDFENRaG?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix</link><comments>https://teletype.in/@snehacynix/iFDFENRaG?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=snehacynix#comments</comments><dc:creator>snehacynix</dc:creator><title>SQL is 46 years old - here’s 8 reasons we still use it today</title><pubDate>Thu, 04 Jun 2020 07:35:25 GMT</pubDate><description><![CDATA[The whole survey is a fascinating foray into the brains of developers and the global software industry.]]></description><content:encoded><![CDATA[
  <p>The whole survey is a fascinating foray into the brains of developers and the global software industry.</p>
  <p>But one thing struck us: SQL is the second-most common programming language, used by 50% of all developers (Web, Desktop, Sysadmin/DevOps, Data Scientist/Engineer) and beaten only by JavaScript - a language half the age of SQL.</p>
  <p>That’s quite an achievement for a 46-year-old, especially given the exponential rate of change so common in software and technology. <a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank"><strong>sql server dba online training</strong></a> helps you to learn more effectively.</p>
  <p>And yes, it’s true C and C++ are both nearly as old or older than SQL, but even combined they still aren’t as prevalent as SQL is today.</p>
  <h2>So why do we still use SQL?</h2>
  <p>The simple fact that both arrived early in the life of computing, and that 90% of the time they just work, means databases have become a ‘solved problem’ you no longer need to think about.</p>
  <p>It’s like how MailChimp has become synonymous with sending email newsletters. If you want to work with data you use RDBMS and SQL. In fact, there usually needs to be a good reason <em>not</em> to use them. Just like there needs to be a good reason not to use MailChimp for sending emails, or Stripe for taking card payments.</p>
  <p>But people do use other email automation software and payment solutions, just like people use NoSQL databases. Yet even with other, albeit less mature, database technology available, SQL still reigns and reigns well.</p>
  <p>So, finally, here are 8 reasons we still use SQL 46 years after it was first cooked up.</p>
  <h2>1. Simple mathematics</h2>
  <p>SQL was designed specifically for data so, surprise, surprise, it excels at accessing and organizing data.</p>
  <p>Reason one: SQL is damn good at what it does.</p>
  <h2>2. Battle-tested</h2>
  <p>RDBMS have been around for a while so they’ve been used in many, many different scenarios. From pre-web offline databases to heavily-modified SQL databases playing a central role in global apps like Facebook - RDBMS and SQL are battle-tested and have proven to be reliable after countless millions of hours running in production.</p>
  <p>There’s a lot to be said for software that <em>just works</em>, especially when you’re dealing with data and databases where losses, corruption, or failure are catastrophic. Edge cases often benefit from mature solutions with proven patterns for backups, change management, and operational rigor.</p>
  <p>Hence a SQL database is nearly always the best choice.</p>
  <h2>3. Knowledge and community</h2>
  <p>When things are around for a while a general body of knowledge is built up around them. SQL is no different. Over the years a vast array of shared SQL knowledge in the form of documentation, thriving communities, and plenty of technical talent has developed.</p>
  <p>Such a vast body of information with an active community around it does a lot to keep a technology around. Because the community is so active and the documentation so extensive, people and businesses gravitate towards the technology. Because people gravitate towards the technology the community grows and the level of knowledge deepens and is shared with new adopters.</p>
  <p>Over the years, this is what’s happened with SQL.</p>
  <h2>4. Simplicity</h2>
  <p>As far as languages go, SQL is easy to learn. It can take just a few days to learn the limited number of functions one can use to run queries and return data. Simple.</p>
  <p>Even roles that are traditionally non-technical such as marketing, C-level executives, and non-technical product managers are known to learn basic SQL to support their roles.</p>
  <p>Deeply understanding the relational database systems that SQL runs on is another thing. But for a vast majority of simple data queries, SQL is great.</p>
  <h2>5. Ubiquity</h2>
  <p>With half of developers using SQL and RDBMS it’s safe to say the language and technology is ubiquitous. This is no bad thing. As mentioned above, knowledge and community thrives in this situation. And due to its simplicity, SQL is almost common knowledge among developers and those they work with. <a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank"><strong>sql dba course</strong></a> along with real time projects.</p>
  <p>This means skill sets easily transfer between companies and industries, which means talent is readily available, which in turn fuels knowledge creation and community growth.</p>
  <p>The ubiquitous nature of SQL databases has formed a beneficial circular model for growth and its fantastic.</p>
  <h2>6. Open Source and interoperability</h2>
  <p>Generally, SQL isn’t completely interoperable. Vendors aren’t known for following the same standards, largely due to differing syntax. However, SQL syntax varies only slightly between vendors so it’s still possible to reuse SQL with some modification. But this isn’t ideal and some vendors would rather their syntax wasn’t reusable.</p>
  <h2>7. Why code when you can use SQL?</h2>
  <p>SQL is <em>made</em> for joining data, filtering data, selecting columns and so on. Doing these things in your own custom code instead of relying on SQL and the database software leads to writing unnecessary lines of code with no added value.</p>
  <p>Here’s an example. Let’s say we need data to create a “California revenue Q3” report.</p>
  <p>You can create this report by writing one line of SQL that magically:</p>
  <ul>
    <li>Fetches users from the California table</li>
    <li>Sorts the data</li>
    <li>Totals the data</li>
    <li>Orders the data so you can show one column that says “California revenue Q3 2017”</li>
  </ul>
  <p>This is what the one line of SQL would look like:</p>
  <pre>SELECT SUM(Value_USD) AS California_Revenue_Q3
FROM Transactions
WHERE Location = &#x27;California&#x27;
  AND DATEPART(q, Date) = 3
  AND YEAR(Date) = 2017;</pre>
  <p>And if we wanted to break it down by location the SQL would be as follows:</p>
  <pre>SELECT Location, SUM(Value_USD) AS Revenue_Q3
FROM Transactions
WHERE DATEPART(q, Date) = 3
  AND YEAR(Date) = 2017
GROUP BY Location
ORDER BY Location;</pre>
  <p>And if we wanted the top five areas by revenue:</p>
  <pre>SELECT TOP 5 Location, SUM(Value_USD) AS Revenue_Q3
FROM Transactions
WHERE DATEPART(q, Date) = 3
  AND YEAR(Date) = 2017
GROUP BY Location
ORDER BY SUM(Value_USD) DESC;</pre>
  <p>To run these queries in other languages would be complicated, time-consuming, and take far too much code. SQL was purposely designed to slice data and it does it well. Not to mention that it’s more efficient to bring the computation to the data, rather than bringing the data to the computation.</p>
  <h2>8. SQL/RDBMS and NoSQL/DBMS databases play different roles</h2>
  <p>Databases are tools. They’re not all hammers. You have wrenches, screwdrivers, saws, spanners, etc. Each does a different job and solves a different problem. There are SQL, key-value, time-series, blockchain, embedded databases, and more. Each type of database is good at some things and bad at others.</p>
  <p>Relational databases are fantastic when you need to express relationships in a system when you can’t foresee all possible permutations of data combination, aggregation, or usage. And, honestly, most systems fall into this category. Plus the SQL language itself offers a user-friendly way to organize data in the way you need it.</p>
  <p>SQL/RDBMS are just one of many tools for a specific job - and just so happen to be a perfectly feasible tool for many jobs. And when consistent data integrity is essential (for example, in finance), they are the best.</p>
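<p>The integrity point is worth a concrete example: a transaction guarantees that related changes succeed or fail together. (The table and column names below are hypothetical.)</p>

```sql
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1;  -- debit one account
UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2;  -- credit another
COMMIT;  -- both updates are applied, or neither is
```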
  <p>SQL databases have their drawbacks and aren’t the best choice for certain jobs. But for a vast majority of cases they simply blow every other NoSQL solution out of the water. <strong><a href="https://onlineitguru.com/sql-server-dba-training.html" target="_blank">sql dba training</a> </strong>helps you to learn more skills and techniques.</p>
  <p>And if you’re going to get riled up about scale, realistically only a tiny percentage of projects will ever need to worry about scaling an RDBMS - <strong>you’re not Facebook or Google</strong>. You can still have millions of users with a SQL database and have no issues.</p>

]]></content:encoded></item></channel></rss>