<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>@mukunda</title><generator>teletype.in</generator><description><![CDATA[@mukunda]]></description><link>https://teletype.in/@mukunda?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/mukunda?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/mukunda?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Tue, 28 Apr 2026 17:13:14 GMT</pubDate><lastBuildDate>Tue, 28 Apr 2026 17:13:14 GMT</lastBuildDate><item><guid isPermaLink="true">https://teletype.in/@mukunda/jnJ9TQ1N9</guid><link>https://teletype.in/@mukunda/jnJ9TQ1N9?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/jnJ9TQ1N9?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>Exploring Micro-Frameworks: Spring Boot</title><pubDate>Fri, 26 Jun 2020 10:29:33 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/9c/ac/9cacf68c-8cf2-4725-9846-73a732570da0.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/27/0e/270eca04-82ae-47a2-8d11-4914805c4d35.png"></img>Spring Boot is a brand new framework from the team at Pivotal, designed to simplify the bootstrapping and development of a new Spring application. The framework takes an opinionated approach to configuration, freeing developers from the need to define boilerplate configuration. In that, Boot aims to be a front-runner in the ever-expanding rapid application development space.]]></description><content:encoded><![CDATA[
  <p>Spring Boot is a brand new framework from the team at Pivotal, designed to simplify the bootstrapping and development of a new Spring application. The framework takes an opinionated approach to configuration, freeing developers from the need to define boilerplate configuration. In that, Boot aims to be a front-runner in the ever-expanding rapid application development space.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/27/0e/270eca04-82ae-47a2-8d11-4914805c4d35.png" width="772" />
  </figure>
  <p>The Spring IO platform has been criticized over the years for its bulky XML configuration and complex dependency management. At last year’s SpringOne 2GX conference, Pivotal CTO Adrian Colyer acknowledged those criticisms and made special note that a goal of the platform going forward is to embrace an XML-free development experience. Boot takes that mission statement to the extreme, not only freeing developers from the need for XML but also, in some scenarios, releasing them from the tedium of writing import statements. In the days following its public beta release, Boot gained some viral popularity by demonstrating the framework’s simplicity with a runnable web application that fit in under 140 characters, delivered in a tweet.</p>
  <h2>Installing Boot</h2>
  <p>At its most fundamental level, Spring Boot is little more than a set of libraries that can be leveraged by any project’s build system. As a convenience, the framework also offers a command-line interface, which can be used to run and test Boot applications. The framework distribution, including the integrated CLI, can be manually downloaded and installed from the Spring repository. A more convenient approach is to use the Groovy enVironment Manager (GVM), which will handle the installation and management of Boot versions. Boot and its CLI can be installed through GVM with the command <code>gvm install springboot</code>. Formulas are available for installing Boot on OS X through the Homebrew package manager. To do so, first tap the Pivotal repository with <code>brew tap pivotal/tap</code>, then run <code>brew install springboot</code>.</p>
  <p>Projects that are to be packaged and distributed will need to rely on build systems like Maven or Gradle. To simplify the dependency graph, Boot’s functionality is modularized, and groups of dependencies can be brought into a project by importing Boot’s so-called &quot;starter&quot; modules. To easily manage dependency versions and to make use of default configuration, the framework exposes a parent POM, which can be inherited by projects. An example POM for a Spring Boot project is shown below.</p>
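  <p>A minimal sketch of such a parent-POM setup might look like the following (the group, artifact, and Boot version here are illustrative; substitute the current release for your project):</p>
  <pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot;&gt;
    &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;
    &lt;groupId&gt;com.example&lt;/groupId&gt;
    &lt;artifactId&gt;boot-demo&lt;/artifactId&gt;
    &lt;version&gt;1.0.0&lt;/version&gt;

    &lt;!-- Inherit dependency versions and default configuration from Boot --&gt;
    &lt;parent&gt;
        &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
        &lt;artifactId&gt;spring-boot-starter-parent&lt;/artifactId&gt;
        &lt;version&gt;1.0.2.RELEASE&lt;/version&gt;
    &lt;/parent&gt;

    &lt;dependencies&gt;
        &lt;!-- A &quot;starter&quot; module pulls in a coherent group of dependencies --&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
            &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt;
        &lt;/dependency&gt;
    &lt;/dependencies&gt;
&lt;/project&gt;
</pre>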
  <h2>Developing a Spring Boot Application</h2>
  <p>The most popular example of a Spring Boot application is one that was delivered via Twitter shortly following the public announcement of the framework. As demonstrated in its entirety below, a very simple Groovy file can be crafted into a powerful Spring-backed web application.</p>
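  <p>A Groovy sketch reconstructed from the behavior described below (the original tweet-sized listing may have differed slightly; note that no import statements are needed, for reasons explained later):</p>
  <pre>@RestController
class App {
    @RequestMapping(&quot;/&quot;)
    String home() {
        &quot;hello&quot;
    }
}
</pre>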
  <p>This application can be run from the Spring Boot CLI by executing the <code>spring run App.groovy</code> command. Boot analyzes the file and, through various identifiers known as &quot;compiler auto-configuration&quot;, determines that it is intended to be a web application. It then, in turn, bootstraps the Spring application context inside an embedded Tomcat container on the default port of 8080. Opening a browser and navigating to the provided URL will land you on a page with a simple text response, &quot;hello&quot;. This process of providing a default application context and an embedded container allows developers to focus on developing application and business logic, and frees them from the tedium of otherwise boilerplate configuration.</p>
  <p>Boot’s ability to ascertain the desired functionality of a class is what makes it such a powerful tool for rapid application development. When applications are executed from the Boot CLI, they are built using the internal Groovy compiler, which makes it possible to programmatically inspect and modify a class while its bytecode is being generated. In this way, developers who use the CLI are not only freed from the need to define default configuration but, to an extent, also from defining certain import statements that can be recognized and automatically added during the compilation process. Additionally, when applications are run from the CLI, Groovy’s built-in dependency manager, &quot;Grape&quot;, is used to resolve the classpath dependencies needed to bootstrap the compilation and runtime environments, as determined by Boot’s compiler auto-configuration mechanisms. This idiom not only makes the framework more user-friendly, but also allows different versions of Spring Boot to be coupled with specific versions of libraries from the Spring IO platform, which in turn means that developers do not need to be concerned with managing a complex dependency graph and versioning structure. Additionally, it opens the door for rapid prototyping and quick generation of proof-of-concept project code.</p>
  <p>For projects that are not built with the CLI, Boot provides a host of &quot;starter&quot; modules, which define a set of dependencies that can be brought into a build system in order to resolve the specific libraries needed from the framework and its parent platform. As an example of this, the <code>spring-boot-starter-actuator</code> dependency pulls in a set of base Spring projects to get an application quickly configured and up-and-running. The emphasis of this dependency is on developing web applications, and specifically RESTful web services. When included in conjunction with the <code>spring-boot-starter-web</code> dependency, it will provide auto-configuration to bootstrap an embedded Tomcat container, and will map endpoints useful to micro-service applications, like server information, application metrics, and environment details. Additionally, when the <code>spring-boot-starter-security</code> module is brought in, the actuator will auto-configure Spring Security to provide the application with basic authentication and other advanced security features. For any application structure, it will also include an internal auditing framework that can be used for reporting purposes or application-specific needs, like developing an authentication-failure lock-out policy.</p>
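  <p>For instance, a Maven build inheriting the parent POM sketched earlier might combine the web, actuator, and security starters like this (versions are inherited from the parent, so none are declared; a sketch, not an exact listing from the original article):</p>
  <pre>&lt;dependencies&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
        &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt;
    &lt;/dependency&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
        &lt;artifactId&gt;spring-boot-starter-actuator&lt;/artifactId&gt;
    &lt;/dependency&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
        &lt;artifactId&gt;spring-boot-starter-security&lt;/artifactId&gt;
    &lt;/dependency&gt;
&lt;/dependencies&gt;
</pre>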
  <p>To demonstrate quickly getting a Spring web application up and running from within a Java Maven project, a single application class is enough to boot the embedded container and serve a request.</p>
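  <p>A minimal sketch of that class (the class name and mapping are illustrative; in newer Boot versions, <code>@SpringBootApplication</code> combines <code>@Configuration</code>, <code>@EnableAutoConfiguration</code>, and <code>@ComponentScan</code>):</p>
  <pre>import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class Application {

    // Serves the same &quot;hello&quot; response as the Groovy example above
    @RequestMapping(&quot;/&quot;)
    public String home() {
        return &quot;hello&quot;;
    }

    public static void main(String[] args) {
        // Bootstraps the application context and the embedded container
        SpringApplication.run(Application.class, args);
    }
}
</pre>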

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/E3YxZGYjJ</guid><link>https://teletype.in/@mukunda/E3YxZGYjJ?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/E3YxZGYjJ?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>Why Model Explainability is The Next Data Science Super power</title><pubDate>Thu, 25 Jun 2020 09:46:17 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/62/e5/62e5f69d-7445-4c5b-969f-f88175aa9f47.png"></img>Some people think machine learning models are black boxes, useful for making predictions but otherwise unintelligible; but the best data scientists know techniques to extract real-world insights from any model. For any given model, these data scientists can easily answer questions like What features in the data did the model think are most important? For any single prediction from a model, how did each feature in the data affect that particular prediction What interactions between features have the biggest effects on a model’s predictions Answering these questions is more broadly useful than many people realize. This inspired me to create Kaggle’s model explainability micro-course. Whether you learn the techniques from Kaggle or from...]]></description><content:encoded><![CDATA[
  <p>Some people think machine learning models are black boxes, useful for making predictions but otherwise unintelligible; but the best data scientists know techniques to extract real-world insights from any model. For any given model, these data scientists can easily answer questions like:</p>
  <ul>
    <li>What features in the data did the model think are most important?</li>
    <li>For any single prediction from a model, how did each feature in the data affect that particular prediction?</li>
    <li>What interactions between features have the biggest effects on a model’s predictions?</li>
  </ul>
  <p>Answering these questions is more broadly useful than many people realize. This inspired me to create Kaggle’s model explainability micro-course. Whether you learn the techniques from Kaggle or from a comprehensive resource like Elements of Statistical Learning, these techniques will totally change how you build, validate, and deploy machine learning models.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/62/e5/62e5f69d-7445-4c5b-969f-f88175aa9f47.png" width="601" />
  </figure>
  <h2>Why Are These Insights Valuable?</h2>
  <p>The five most important applications of model insights are:</p>
  <ul>
    <li>Debugging</li>
    <li>Informing feature engineering</li>
    <li>Directing future data collection</li>
    <li>Informing human decision-making</li>
    <li>Building trust</li>
  </ul>
  <h3>Debugging</h3>
  <p>The world has a lot of unreliable, disorganized, and generally dirty data. You add a potential source of errors as you write preprocessing code. Add in the potential for target leakage, and it is the norm rather than the exception to have errors at some point in a real data science project.</p>
  <p>Given the frequency and potentially disastrous consequences of bugs, debugging is one of the most valuable skills in data science. Understanding the patterns a model is finding will help you identify when those are at odds with your knowledge of the real world, and this is typically the first step in tracking down bugs.</p>
  <h3>Informing Feature Engineering</h3>
  <p>Feature engineering is usually the most effective way to improve model accuracy. Feature engineering usually involves repeatedly creating new features using transformations of your raw data or features you have previously created.<br />Sometimes you can go through this process using nothing but intuition about the underlying topic. But you’ll need more direction when you have 100s of raw features or when you lack background knowledge about the topic you are working on.</p>
  <p>A Kaggle competition to predict loan defaults gives an extreme example. This competition had 100s of raw features. For privacy reasons, the features had names like f1, f2, and f3 rather than common English names. This simulated a scenario where you have little intuition about the raw data.</p>
  <p>One competitor found that the difference between two of the features, specifically f527 - f528, created a very powerful new feature. Models including that difference as a feature were far better than models without it. But how might you think of creating this variable when you start with hundreds of variables?<br />The techniques you’ll learn in this course would make it transparent that f527 and f528 are important features, and that their roles are tightly entangled. This will direct you to consider transformations of these two variables, and likely find the “golden feature” of f527 - f528. As an increasing number of datasets start with 100s or 1000s of raw features, this approach is becoming increasingly important.</p>
  <h3>Directing Future Data Collection</h3>
  <p>You have no control over datasets you download online. But many businesses and organizations using data science have opportunities to expand the types of data they collect. Collecting new types of data can be expensive or inconvenient, so they only want to do this if they know it will be worthwhile. Model-based insights give you a good understanding of the value of the features you currently have, which will help you reason about which new values may be most helpful.</p>
  <h3>Informing Human Decision-Making</h3>
  <p>Some decisions are made automatically by models. Amazon doesn’t have humans (or elves) scurry to decide what to show you whenever you go to their website. But many important decisions are made by humans. For these decisions, insights can be more valuable than predictions.</p>
  <h3>Building Trust</h3>
  <p>Many people won’t assume they can trust your model for important decisions without verifying some basic facts. This is a smart precaution given the frequency of data errors. In practice, showing insights that fit their general understanding of the problem will help build trust, even among people with little deep knowledge of data science.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/OR806rL_YD</guid><link>https://teletype.in/@mukunda/OR806rL_YD?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/OR806rL_YD?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>How Is Data Science Changing Web Design?</title><pubDate>Wed, 24 Jun 2020 09:54:48 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/66/f9/66f923c4-e7af-475e-9a50-552e3d59c6a4.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/a1/8f/a18f82b7-c9df-4667-a4cd-a6823ff23597.jpeg"></img>Data science isn’t just changing web design in a minor way: it’s changing every aspect of it from the start of the design process to the end (and even beyond through the update process). Whenever you have the resources and expertise to deploy it, it’s worthwhile, because having cut-and-dry insight into performance is invaluable.]]></description><content:encoded><![CDATA[
  <p>Data science isn’t just changing web design in a minor way: it’s changing every aspect of it from the start of the design process to the end (and even beyond through the update process). Whenever you have the resources and expertise to deploy it, it’s worthwhile, because having cut-and-dry insight into performance is invaluable.</p>
  <figure class="m_custom">
    <img src="https://teletype.in/files/a1/8f/a18f82b7-c9df-4667-a4cd-a6823ff23597.jpeg" width="699.3638170974153" />
  </figure>
  <p>When the internet first began to pick up some steam, it had some of the hallmarks of the Wild West: the rules hadn’t been clearly defined, for instance, and direct competition was fairly minimal because there was so much space to be filled. As a result, there was a lot of freewheeling experimentation driven by gut feeling: just try something and see how it goes.</p>
  <p>As the years have gone by, the online world has gone from an attention-grabbing novelty to a fundamental part of daily life. Along the way, its standards have changed immensely, and there are two reasons for this: technological progress in general, and frequent efforts from online brands to exceed previous levels of service and performance to outperform their rivals.</p>
  <p>Given how competitive the web is now, going by gut feeling won’t get brands very far, so they are essentially required to invest in data science: using the smartest methods available to them to analyze relevant data and reach informed conclusions about what they need to do. In this post, we’re going to consider how data science is changing web design. Let’s begin.</p>
  <h2><strong>IT’S INFORMING EVERY STEP OF THE PROCESS</strong></h2>
  <p>The significance of data science is such that it doesn’t simply factor into one element of the web design process (weighing in at the end, perhaps); instead, it has a key role to play at every point and in every department. The reasoning for this should be fairly obvious: given that every step in a digital process inevitably produces traceable results (without traceable results, you can’t know how well something is working), the opportunity for data science is always there.</p>
  <p>It may not be used at every opportunity when small businesses run web design projects, but that’s simply because they don’t have the resources to invest so broadly. Look at big companies to see where things are going. Case in point: eCommerce giant Shopify (known for its &quot;sell everything everywhere&quot; message) has a full Data Science &amp; Engineering team, but that team doesn’t operate as an isolated unit.</p>
  <h3><strong>IT’S OFFERING IMPROVED USER EXPERIENCES</strong></h3>
  <p>Putting so much time and effort into data science wouldn’t be worthwhile if it didn’t ultimately get impressive results, so we need to think about the net product of the investment: improved user experiences. Excellent web design needs to allow users to find what they need with minimal effort and optimal convenience. In addition to being easy to operate, it should find other ways to impress the user: the bigger a positive impression it can make, the better.</p>
  <p>Now think about all the customized and personalized experiences you can now get in the online world. You can visit a familiar retailer and benefit from dynamic recommendations that factor in your previous purchases, deals that suit your interests, and even design elements that reflect your preferences (not common, but not unheard of).</p>
  <p>All of these things are made possible by the data science field: without machine learning to automate them, dynamic recommendations would need to be done manually. You can even think about chatbot-enhanced customer support. Being able to check up on the status of an order by issuing a request through a chatbot window feels like a trivial thing, but it was made possible by advances in natural language processing that couldn’t have been achieved without deploying machine learning to process and parse vast quantities of data.</p>
  <h3>IT’S STARTING TO AUTOMATE UPDATE PATHS</h3>
  <p>Web designs aren’t supposed to be static. Once you deploy them, it won’t be very long before they’re outdated relative to newer designs, so you need to make a commitment to keeping them updated. But what if you didn’t need to slowly work on manual updates and carefully roll them out as appropriate? What if you could automate much of the update process?</p>
  <p>Well, with the right process in place (powered by data science, of course), you could. You could set up a system with various flexible elements and have it reshuffle those elements in response to analytics from testing. Tweak one element and see how the results change: if they get worse, revert the tweak, and if they get better, leave it as it is and tweak something else.</p>
  <p>In the long run, self-optimizing systems are going to become extremely common. Human intelligence can then be put towards more interesting things, such as new projects or updates that go beyond basic comparison testing. The key will be finding a balance between the things at which computers excel (repetitive processes at massive scale) and the things at which human brains excel.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/HBP8gzX_H</guid><link>https://teletype.in/@mukunda/HBP8gzX_H?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/HBP8gzX_H?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>3 Reasons Counting is the Hardest Thing in Data Science</title><pubDate>Tue, 23 Jun 2020 09:22:15 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/d5/08/d508ac31-e282-4ffc-bcc7-5695a950ec13.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/a6/e8/a6e86a02-423c-4578-bc83-dae573b5998f.jpeg"></img>Counting is hard. You might be surprised to hear me say that, but it's true. As a data scientist, I've done it all - everything from simple regression analysis all the way to coding Hadoop Map Reduce jobs that process hundreds of billions of data points each month. And, with all that experience, I've found that counting often involves far more time and effort.]]></description><content:encoded><![CDATA[
  <p>Counting is hard. You might be surprised to hear me say that, but it&#x27;s true. As a data scientist, I&#x27;ve done it all - everything from simple regression analysis all the way to coding Hadoop MapReduce jobs that process hundreds of billions of data points each month. And, with all that experience, I&#x27;ve found that counting often involves far more time and effort than you&#x27;d expect.</p>
  <figure class="m_custom">
    <img src="https://teletype.in/files/a6/e8/a6e86a02-423c-4578-bc83-dae573b5998f.jpeg" width="687.1129032258063" />
  </figure>
  <h2>1) Counting requires numerous, often arbitrary decisions</h2>
  <p>Questions like &quot;How many computer science students were there at UNC Charlotte last year?&quot; or &quot;How many graduates from North Carolina&#x27;s public universities find employment within one year of graduation?&quot; seem simple, right?</p>
  <p>Unfortunately, answering those questions requires defining a whole host of terms. Just for counting students, you have to decide:</p>
  <ul>
    <li>Do part-time students count?</li>
    <li>What about non-degree-seeking students?</li>
    <li>Undergraduates only, maybe?</li>
    <li>Are we counting total unique individuals enrolled over the course of a year, or something else?</li>
    <li>School year, fiscal year, or calendar year?</li>
    <li>How do we count students who enrolled in multiple programs? Is it OK that enrollment for the university is lower than the sum of the enrollments in its constituent programs?</li>
  </ul>
  <p>Of course, depending on the purpose of the data, there are right answers to many of these questions. For budgetary purposes, it probably makes sense to go with the fiscal year, for example. But somebody has to make those decisions, which means somebody has to take ownership of setting the business rule.</p>
  <h2>2) Counting is easy to understand</h2>
  <p>This one isn&#x27;t necessarily unique to counting, but it does apply to any sort of basic statistical research. The simpler a statistic or a model is to understand, the easier it is for stakeholders to articulate an opinion about.</p>
  <p>Say you go to a PM or a middle manager in your company and tell them &quot;We&#x27;ve just finished work on a machine learning model that can detect 90% of fraudulent orders with very few false positives.&quot; The response you&#x27;re likely to get is something along the lines of &quot;Great work! What would it take to put this into production?&quot; It&#x27;s very unlikely that they&#x27;re going to dicker about the features or hyper-parameters of your model. The relative complexity involved means it&#x27;s effectively a black box.</p>
  <p>Not so with simpler models. Most people have a pretty good intuitive understanding of things like correlation or multiple regression, even if they don&#x27;t know all the details about how they work. And everybody can understand counting. This means that instead of getting a quick &quot;Good Job&quot; in response to your work, you&#x27;re much more likely to get a host of questions about how your research was done.</p>
  <p>Of course, there are upsides to this - all of our work could probably benefit from the added scrutiny of stakeholder review. Nevertheless, it adds a significant amount of relational and political overhead to the actual analytics.</p>
  <h2>3) Counting is often high-stakes</h2>
  <p>Again, this isn&#x27;t exclusive to counting... but it does greatly magnify the effects of reasons 1 and 2. Counting is very frequently high-stakes for people other than the analyst. Consider:</p>
  <ul>
    <li>How many sales did Jones make last year? Her bonus likely depends on it.</li>
    <li>How many people live in Austin, TX? Getting this wrong could alter re-districting and change the balance of political power.</li>
  </ul>
  <p>Counting requires making a lot of (often arbitrary) decisions. It&#x27;s simple enough that everybody can articulate an opinion about those decisions. And it&#x27;s often important enough that folks will have a very strong incentive to form and articulate opinions. This is a recipe for fierce political battles involving stakeholders with entrenched, often conflicting, interests.</p>
  <p>In the end, the counting itself may be unbelievably easy... a simple <code>COUNT DISTINCT</code> query with a carefully crafted <code>WHERE</code> clause is a pretty trivial task for any data scientist worth their salt. But making all the decisions necessary to actually start doing the counting is frequently a long, frustrating, relational-not-technical process.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/nLfpbHedB</guid><link>https://teletype.in/@mukunda/nLfpbHedB?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/nLfpbHedB?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>Guide to Spring Boot REST API Error Handling</title><pubDate>Mon, 22 Jun 2020 10:19:14 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/50/30/5030d7a7-a5b3-4c7e-b933-5d19fd20fbcf.jpeg"></img>Handling errors correctly in APIs while providing meaningful error messages is a very desirable feature, as it can help the API client properly respond to issues. The default behavior tends to be returning stack traces that are hard to understand and ultimately useless for the API client. Partitioning the error information into fields also enables the API client to parse it and provide better error messages to the user. In this article, we will cover how to do proper error handling when building a REST API with Spring Boot.]]></description><content:encoded><![CDATA[
  <p>Handling errors correctly in APIs while providing meaningful error messages is a very desirable feature, as it can help the API client properly respond to issues. The default behavior tends to be returning stack traces that are hard to understand and ultimately useless for the API client. Partitioning the error information into fields also enables the API client to parse it and provide better error messages to the user. In this article, we will cover how to do proper error handling when building a REST API with Spring Boot.</p>
  <figure class="m_custom">
    <img src="https://teletype.in/files/50/30/5030d7a7-a5b3-4c7e-b933-5d19fd20fbcf.jpeg" width="687" />
  </figure>
  <p>Building REST APIs with Spring has become the standard approach for Java developers over the last couple of years. Using Spring Boot helps substantially, as it removes a lot of boilerplate code and enables auto-configuration of various components. We will assume that you’re familiar with the basics of API development with those technologies before applying the knowledge described here. If you are still unsure about how to develop a basic REST API, then you should start with this article about Spring MVC or another one about building a Spring REST Service.</p>
  <h2>Making Error Responses Clearer</h2>
  <p>Throughout this article, we’ll be using the source code hosted on GitHub of an application that implements a REST API for retrieving objects that represent birds. It has the features described in this article and a few more examples of error handling scenarios. Here’s a summary of the endpoints implemented in that application:</p>
  <ul>
    <li><code>GET /birds/{birdId}</code>: Gets information about a bird and throws an exception if not found.</li>
    <li><code>GET /birds/noexception/{birdId}</code>: Also gets information about a bird, but doesn’t throw an exception in case the bird is not found.</li>
    <li><code>POST /birds</code>: Creates a bird.</li>
  </ul>
  <p>The Spring framework MVC module comes with some great features to help with error handling. But it is left to the developer to use those features to treat exceptions and return meaningful responses to the API client.</p>
  <p>Let’s look at an example of the default Spring Boot answer when we issue an HTTP POST to the <code>/birds</code> endpoint with the following JSON object, which has the string “aaa” in the field “mass,” a field that expects an integer:</p>
  <pre>{
 &quot;scientificName&quot;: &quot;Common blackbird&quot;,
 &quot;specie&quot;: &quot;Turdus merula&quot;,
 &quot;mass&quot;: &quot;aaa&quot;,
 &quot;length&quot;: 4
}
</pre>
  <p>The Spring Boot default answer, without proper error handling:</p>
  <pre>{
 &quot;timestamp&quot;: 1500597044204,
 &quot;status&quot;: 400,
 &quot;error&quot;: &quot;Bad Request&quot;,
 &quot;exception&quot;: &quot;org.springframework.http.converter.HttpMessageNotReadableException&quot;,
 &quot;message&quot;: &quot;JSON parse error: Unrecognized token &#x27;aaa&#x27;: was expecting (&#x27;true&#x27;, &#x27;false&#x27; or &#x27;null&#x27;); nested exception is com.fasterxml.jackson.core.JsonParseException: Unrecognized token &#x27;aaa&#x27;: was expecting (&#x27;true&#x27;, &#x27;false&#x27; or &#x27;null&#x27;)\n at [Source: java.io.PushbackInputStream@cba7ebc; line: 4, column: 17]&quot;,
 &quot;path&quot;: &quot;/birds&quot;
}
</pre>
  <p>Well… the response message has some good fields, but it is focused too much on what the exception was. Incidentally, this is produced by the <code>DefaultErrorAttributes</code> class from Spring Boot. The <code>timestamp</code> field is an integer that doesn’t even indicate which unit of measurement it is in. The <code>exception</code> field is only interesting to Java developers, and the message leaves the API consumer lost in implementation details that are irrelevant to them. And what if there were more details we could extract from the exception that the error originated from? So let’s learn how to treat those exceptions properly and wrap them into a nicer JSON representation to make life easier for our API clients.</p>
  <p>As we’ll be using Java 8 date and time classes, we first need to add a Maven dependency for the Jackson JSR310 converters. They take care of converting Java 8 date and time classes to JSON representation using the <code>@JsonFormat</code> annotation:</p>
  <pre>&lt;dependency&gt;
   &lt;groupId&gt;com.fasterxml.jackson.datatype&lt;/groupId&gt;
   &lt;artifactId&gt;jackson-datatype-jsr310&lt;/artifactId&gt;
&lt;/dependency&gt;
</pre>
  <p>Ok, so let’s define a class for representing API errors. We’ll be creating a class called <code>ApiError</code> that has enough fields to hold relevant information about errors that happen during REST calls.</p>
  <pre>class ApiError {

   private HttpStatus status;
   @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = &quot;dd-MM-yyyy hh:mm:ss&quot;)
   private LocalDateTime timestamp;
   private String message;
   private String debugMessage;
   private List&lt;ApiSubError&gt; subErrors;

   private ApiError() {
       timestamp = LocalDateTime.now();
   }

   ApiError(HttpStatus status) {
       this();
       this.status = status;
   }

   ApiError(HttpStatus status, Throwable ex) {
       this();
       this.status = status;
       this.message = &quot;Unexpected error&quot;;
       this.debugMessage = ex.getLocalizedMessage();
   }

   ApiError(HttpStatus status, String message, Throwable ex) {
       this();
       this.status = status;
       this.message = message;
       this.debugMessage = ex.getLocalizedMessage();
   }
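
   // Getters and setters omitted here for brevity; they could be generated
   // with Lombok&#x27;s @Data, as in the ApiValidationError snippet below.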
}
</pre>
  <ul>
    <li>The <code>status</code> property holds the operation call status. It will be anything from 4xx to signal client errors or 5xx to signal server errors. A common scenario is HTTP code 400 (BAD_REQUEST), when the client, for example, sends an improperly formatted field, like an invalid email address.</li>
    <li>The <code>timestamp</code> property holds the date-time instance of when the error happened.</li>
    <li>The <code>message</code> property holds a user-friendly message about the error.</li>
    <li>The <code>debugMessage</code> property holds a system message describing the error in more detail.</li>
    <li>The <code>subErrors</code> property holds an array of sub-errors that happened. This is used for representing multiple errors in a single call. An example would be validation errors in which multiple fields have failed the validation. The <code>ApiSubError</code> class is used to encapsulate those.</li>
  </ul>
  <pre>abstract class ApiSubError {

}

@Data
@EqualsAndHashCode(callSuper = false)
@AllArgsConstructor
class ApiValidationError extends ApiSubError {
   private String object;
   private String field;
   private Object rejectedValue;
   private String message;

   ApiValidationError(String object, String message) {
       this.object = object;
       this.message = message;
   }
}
</pre>
  <p>The <code>ApiValidationError</code> class, then, extends <code>ApiSubError</code> and expresses validation problems encountered during the REST call.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/KhReTyk1m</guid><link>https://teletype.in/@mukunda/KhReTyk1m?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/KhReTyk1m?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>Data Science Production Methods</title><pubDate>Sat, 20 Jun 2020 13:49:28 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/53/00/5300dddd-3058-4a43-9d88-9ec9f73b9169.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/be/23/be234534-fa44-4025-8108-d59caf5dfd58.png"></img>Creating a data science project and executing its modules is the primary step in the production environment, which is where every startup or some established companies fail. While implementing a new module of an existing data science project seems to difficult, working on the module due to the discontinuation of complex tools and techniques used in the design environment is even more so.]]></description><content:encoded><![CDATA[
  <p>Creating a data science project and executing its modules is the primary step in the production environment, and it is where many startups and even some established companies fail. While implementing a new module of an existing data science project seems difficult, maintaining a module after the complex tools and techniques used in the design environment have been discontinued is even more so.</p>
  <figure class="m_custom">
    <img src="https://teletype.in/files/be/23/be234534-fa44-4025-8108-d59caf5dfd58.png" width="687" />
  </figure>
  <h2>Key Ways to Build an Optimally Designed Production Pipeline</h2>
  <h3>Strategic Data Packing</h3>
  <p>Consider any project you want: it is a known fact that there is no project without data, as data is a given. Each database comprises a huge amount of data in distinct formats, and a huge amount of code (say, n lines of code in different scripting languages) enables us to turn raw data into predictions. The packing of data and code typically happens during production.</p>
  <p>A typical release process includes:</p>
  <ul>
    <li>Putting a versioning tool in place in order to control the code versions.</li>
    <li>Building a packaging script to pack the code in a zip file format.</li>
    <li>Deploying it to production.</li>
  </ul>
  <h3>Optimization and Retraining of Models</h3>
  <p>To get accurate results, teams work in small iterations. These iterations play a vital role in the process of optimization and retraining. It is essential to have a process laid out in several phases, namely validation, retraining, and the deployment of modules. However, the modules need to be regularly updated to fit new behavior and underlying data changes.</p>
  <p>If you need to retrain your models, make this a distinct step in the production workflow of your data science team. For example, set up your system to retrieve predictive-model data weekly, give the model a rating based on its performance, and then validate the returned results while a human operator verifies them as well.</p>
  <p>Increasing the number of tools leads to a greater number of problems, so it is recommended to keep both the production and design environments on the latest versions of their packages. A data science project can depend on up to 100 R packages, 40 Python packages, and several hundred Java/Scala packages.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mukunda/MNJwOXzSO</guid><link>https://teletype.in/@mukunda/MNJwOXzSO?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda</link><comments>https://teletype.in/@mukunda/MNJwOXzSO?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mukunda#comments</comments><dc:creator>mukunda</dc:creator><title>Spring Boot @ConfigurationProperties</title><pubDate>Sat, 20 Jun 2020 11:57:51 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/08/c9/08c98000-60ff-4c2e-b649-427d084cacae.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/05/82/05825dc4-2faf-48d9-8023-e9649291a5ea.jpeg"></img>Spring Boot provides a very neat way to load properties for an application. Consider a set of properties described using YAML format:]]></description><content:encoded><![CDATA[
  <p>Spring Boot provides a very neat way to load properties for an application. Consider a set of properties described using YAML format:</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/05/82/05825dc4-2faf-48d9-8023-e9649291a5ea.jpeg" width="952" />
  </figure>
  <pre>prefix:
    stringProp1: propValue1
    stringProp2: propValue2
    intProp1: 10
    listProp:
        - listValue1
        - listValue2
    mapProp:
        key1: mapValue1
        key2: mapValue2
</pre>
  <p>These entries can also be described in a traditional application.properties file the following way:</p>
  <pre>prefix.stringProp1=propValue1
prefix.stringProp2=propValue2
prefix.intProp1=10
prefix.listProp[0]=listValue1
prefix.listProp[1]=listValue2
prefix.mapProp.key1=mapValue1
prefix.mapProp.key2=mapValue2
</pre>
  <p>It has taken me a little while, but I do like the hierarchical look of the properties described in YAML format.</p>
  <p>So now, given this property file, a traditional Spring application would have loaded up the properties in the following way:</p>
  <pre>public class SamplePropertyLoadingTest {
    @Value(&quot;${prefix.stringProp1}&quot;)
    private String stringProp1;
    // ...
}
</pre>
  <p>Note the placeholder for the &quot;prefix.stringProp1&quot; key.</p>
  <p>This, however, is not ideal for loading a family of related properties, in this specific case the ones namespaced by the prefix conveniently named &quot;prefix&quot;.</p>
  <p>The approach Spring Boot takes is to define a bean that can hold the entire family of related properties this way:</p>
  <pre>@ConfigurationProperties(prefix = &quot;prefix&quot;)
@Component
public class SampleProperty {
    private String stringProp1;
    private String stringProp2;
    @Max(99)
    @Min(0)
    private Integer intProp1;
    private List&lt;String&gt; listProp;
    private Map&lt;String, String&gt; mapProp;
    ...
}
</pre>
  <p>At runtime, all the fields would be bound to the related properties cleanly.</p>
  <p>Additionally, note the <a href="http://beanvalidation.org/1.0/spec/" target="_blank">JSR-303</a> annotations on top of the &quot;intProp1&quot; field, which validate that the value of the field is between 0 and 99. @ConfigurationProperties will call the validator to ensure that the bound bean is validated.</p>
  <p>An integration test using this feature is shown here:</p>
  <pre>package prop;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = SampleWebApplication.class)
public class SamplePropertyLoadingTest {
    @Autowired
    private SampleProperty sampleProperty;

    @Value(&quot;${prefix.stringProp1}&quot;)
    private String stringProp1;

    @Test
    public void testLoadingOfProperties() {
        System.out.println(&quot;stringProp1 = &quot; + stringProp1);
        assertThat(sampleProperty.getStringProp1(), equalTo(&quot;propValue1&quot;));
        assertThat(sampleProperty.getStringProp2(), equalTo(&quot;propValue2&quot;));
        assertThat(sampleProperty.getIntProp1(), equalTo(10));
        assertThat(sampleProperty.getListProp(), hasItems(&quot;listValue1&quot;, &quot;listValue2&quot;));
        assertThat(sampleProperty.getMapProp(), allOf(hasEntry(&quot;key1&quot;, &quot;mapValue1&quot;),
                hasEntry(&quot;key2&quot;, &quot;mapValue2&quot;)));
    }
}
</pre>
  <p>That covers the basics of loading, binding, and validating a family of related properties with @ConfigurationProperties.</p>

]]></content:encoded></item></channel></rss>