<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Siva Cynix it</title><generator>teletype.in</generator><description><![CDATA[Siva Cynix it]]></description><image><url>https://teletype.in/files/14/eb/14eb4ff2-9b51-4b00-a740-3fcd66995b2a.png</url><title>Siva Cynix it</title><link>https://teletype.in/@onlineprogramming</link></image><link>https://teletype.in/@onlineprogramming?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/onlineprogramming?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/onlineprogramming?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Thu, 14 May 2026 02:04:51 GMT</pubDate><lastBuildDate>Thu, 14 May 2026 02:04:51 GMT</lastBuildDate><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/QMasT88AI</guid><link>https://teletype.in/@onlineprogramming/QMasT88AI?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/QMasT88AI?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Microsoft Azure Data Services Integration into ServiceNow</title><pubDate>Sat, 09 May 2020 06:14:03 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/41/c7/41c7a0eb-8921-4d6c-a981-8b33a2693286.png"></media:content><description><![CDATA[<img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160933i986AA4B79F2BE4BA/image-size/large?v=1.0&amp;px=999"></img>ServiceNow is a SaaS based application that provides service management software via multiple offerings such as: IT services management (ITSM), IT operations management (ITOM) and IT business management (ITBM). ServiceNow is a leader within the Gartner Magic Quadrant and holds a position in the top right of the quadrant as a visionary. ServiceNow ITSM provides the ability to track/monitor tickets created and resolve quickly and effectively.]]></description><content:encoded><![CDATA[
  <p>ServiceNow is a SaaS-based application that provides service management software via multiple offerings, such as IT service management (ITSM), IT operations management (ITOM), and IT business management (ITBM). ServiceNow is a Leader in the Gartner Magic Quadrant, holding a position in the top right of the quadrant. ServiceNow ITSM provides the ability to track and monitor the tickets created and to resolve them quickly and effectively.</p>
  <p>For users who want to analyze incidents and track progress, ServiceNow does provide out-of-the-box reporting and analytics capabilities. In our scenario, the customer wanted to leverage an analytics platform like PowerBI to slice, dice, and visualize the data in a variety of ways.</p>
  <p>More specifically, there was a need to understand cross-functional ITSM impacts on other areas of the organization, develop a comprehensive view, and explore potential metrics and KPIs through a single integrated data view. Hence, extracting data from ServiceNow into Azure, and/or integrating PowerBI with ServiceNow, became a defined requirement.</p>
  <p>In addition, our customer wanted to create a POC applying ML/AI to ITSM operations to support root cause analysis and directly impact cost. There is a finite and measurable cost for each ticket created.</p>
  <p>The sooner major incidents can be identified and resolved, the more they would save on costs. ML/AI models could identify root causes, minimize duplicate tickets, reduce time to resolution, and reduce the operational complexity of managing a large number of open tickets, all resulting in a simplified, better-managed process with measurable cost savings. For more info <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Training</strong></a> </p>
  <p>This post will demonstrate the integration options between Azure and ServiceNow, as well as how to leverage Azure to apply AI/ML to some of the scenarios described earlier. We will share some of the lessons we learned along the way. The integration points between Azure and ServiceNow are:</p>
  <p> </p>
  <ol>
    <li>Azure Data Factory to ServiceNow Integration</li>
    <li>Connect directly to ServiceNow with PowerBI using the SIMBA driver</li>
    <li>Leverage PowerBI Premium ML to execute models on ServiceNow data for the use cases described above</li>
  </ol>
  <p> </p>
  <p>At a high level, the solution will look as follows:</p>
  <p> </p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160933i986AA4B79F2BE4BA/image-size/large?v=1.0&px=999" width="997" />
  </figure>
  <p> </p>
  <h2>Azure Data Factory ServiceNow Connector Integration</h2>
  <p>With Azure Data Factory, there are two integration options into ServiceNow:</p>
  <ol>
    <li>The out-of-the-box ServiceNow Connector, or</li>
    <li>REST API Connector</li>
  </ol>
  <p><strong>ServiceNow Connector</strong></p>
  <p>In your Azure Data Factory, create a new connection and search for ServiceNow as shown below.</p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160934iF5B49F0FDC1E317C/image-size/medium?v=1.0&px=400" width="400" />
  </figure>
  <p>Configure the ServiceNow connectivity:</p>
  <p></p>
  <figure class="m_custom">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160962i8FAC345A67A9A2CD/image-dimensions/271x265?v=1.0" width="271" />
  </figure>
  <p><strong>Key takeaways from the ServiceNow connectivity option:</strong></p>
  <ul>
    <li>The connector is easy to configure and provides access to the out-of-the-box tables and fields in ServiceNow.</li>
    <li>If you are looking for the Problems or Incidents table in ServiceNow, this connector is a great way to get started.</li>
    <li>However, in our scenario we had many user-defined tables and fields in ServiceNow, and this connection option does not yet support user-defined types. Therefore, we went ahead and tested the next option, the REST connector. <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Certification</strong></a></li>
  </ul>
  <p><strong>REST API Connector</strong></p>
  <p>In Azure Data Factory, create a new connection and search for REST as shown below.</p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160964iFB48177FC293775E/image-size/medium?v=1.0&px=400" width="400" />
  </figure>
  <p>Configure the REST API connection to your ServiceNow instance.</p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160965iE7B13A8B07A1DE32/image-size/medium?v=1.0&px=400" width="399" />
  </figure>
  <p><strong>Key takeaways from the REST API connector option:</strong></p>
  <ul>
    <li>Uses the REST API access capabilities provided by ServiceNow</li>
    <li>We used basic authentication to provide REST API access</li>
    <li>The endpoint and relative URL are constructed using the ServiceNow REST API Explorer. Please see the screenshot below from the ServiceNow portal:</li>
  </ul>
  <p><strong><em>Login to ServiceNow and Search for “REST”</em></strong></p>
  <ul>
    <li><strong><em>You can use it to get, create, update, retrieve, and delete records from sources.</em></strong></li>
  </ul>
  <p></p>
  <ul>
    <li>In our case, we want <em>Incidents</em> and <em>Problems</em> data so we can build some analytics and apply AI/ML to this data (a minimal sketch of such a call follows this list).</li>
  </ul>
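  <p>As a hedged illustration (not part of the ADF pipeline itself), the following Python sketch shows the kind of call the REST connector makes against the standard ServiceNow Table API; the instance name and credentials are placeholders:</p>
  <pre># Minimal sketch: pull Incident records from the ServiceNow Table API.
# Assumes basic authentication; instance name and credentials are placeholders.
import requests

INSTANCE = "your-instance"  # placeholder instance name
url = f"https://{INSTANCE}.service-now.com/api/now/table/incident"

response = requests.get(
    url,
    auth=("api_user", "api_password"),   # basic auth, as in our setup
    params={"sysparm_limit": 100},       # page size for the result set
    headers={"Accept": "application/json"},
)
response.raise_for_status()
incidents = response.json()["result"]    # list of incident records
print(len(incidents))</pre>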
  <p>As a side note, we found the ServiceNow tables/schemas documentation relevant to the areas we were interested in, such as a <em>Problem</em> leading to one or many Incidents. The ServiceNow models and schemas provided a great way to understand what was available to us and which table elements we would need.</p>
  <p>Here is a sample of the Incidents table and its associated relationships:</p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160990i5CCFAE4261A813CE/image-size/large?v=1.0&px=999" width="999" />
  </figure>
  <p> </p>
  <p>Once you’ve established the linked service, you can even filter using the REST APIs as shown below.</p>
  <p></p>
  <figure class="m_original">
    <img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160991i11A6D5A980D1195E/image-size/large?v=1.0&px=999" width="999" />
  </figure>
  <p><strong>Key takeaways from configuring the REST API connector:</strong></p>
  <ul>
    <li>REST API call allows data filtering; we can use the ServiceNow REST explorer to construct the relative URL with extra parameters including data filters.</li>
    <li>The relative URL can be dynamically constructed by using Azure Data Factory expressions, functions, and system variables. In our case, we are only interested in the last 365 days of Incidents <strong>(adddays(utcnow(‘yyyy-MM-dd’),-364)…</strong> (a sketch of the equivalent filter follows these takeaways).</li>
  </ul>
  <p></p>
  <ul>
    <li>If you click on “Mappings”, you can see the user-defined fields that the REST API exposes, like “u_rca_status” and “u_major_incident”</li>
    <li>In our scenario, we want to extract data from ServiceNow and put it into an Azure SQL PaaS instance so we can execute queries without impacting ServiceNow. Along the way, we discovered that ADF natively translates the JSON schema, which is then mapped to the target table</li>
  </ul>
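  <p>To make the date filter concrete, here is a hedged Python sketch that builds an equivalent relative URL restricted to roughly the last 365 days via the standard <em>sysparm_query</em> parameter (the table and parameter values mirror the Table API call shown earlier):</p>
  <pre># Sketch: build a Table API relative URL filtered to the last ~365 days,
# mirroring what the ADF expression constructs dynamically.
from datetime import datetime, timedelta, timezone

cutoff = (datetime.now(timezone.utc) - timedelta(days=364)).strftime("%Y-%m-%d")
relative_url = (
    "api/now/table/incident"
    f"?sysparm_query=sys_created_on&gt;={cutoff}"  # only recent incidents
    "&amp;sysparm_limit=1000"
)
print(relative_url)</pre>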
  <h2>PowerBI Desktop to ServiceNow via SIMBA driver</h2>
  <p>We discovered that PowerBI used to provide a ServiceNow app with an out-of-the-box dashboard into ServiceNow; however, it was subsequently removed as an option, and hence we had to look for alternative connectivity options like the SIMBA driver. To get more skills enroll for <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Online Training</strong></a></p>
  <p>SIMBA is a third-party vendor whose driver provides JDBC/ODBC connectivity to ServiceNow. We installed the SIMBA driver and were able to connect to ServiceNow relatively easily. The SIMBA driver was also able to read the user-defined tables and columns, like the Azure Data Factory REST API connector described earlier.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/uLP_WBq6x</guid><link>https://teletype.in/@onlineprogramming/uLP_WBq6x?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/uLP_WBq6x?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Analytics, Intelligence, and Reporting in Servicenow</title><pubDate>Fri, 08 May 2020 11:23:55 GMT</pubDate><description><![CDATA[Optimize processes and increase productivity with Performance Analytics, virtual agents, and machine learning.]]></description><content:encoded><![CDATA[
  <p>Optimize processes and increase productivity with Performance Analytics, virtual agents, and machine learning.</p>
  <p>The Analytics, Intelligence, and Reporting products help provide many different kinds of data to many different kinds of users. Listen to the Story of the Four Stakeholders, which describes the kinds of data different people in your organization might want to see.</p>
  <h2>Get the insights you need</h2>
  <p>The Analytics, Intelligence, and Reporting products can help you to lower costs and increase productivity through process improvement, self-service, and automation. Service owners can deliver and refine AI capabilities quickly, gaining greater insight into real-time patterns and trends for service delivery teams.</p>
  <p>This information enables you to make better, faster decisions—without the need for data science expertise. For more info <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Training</strong></a></p>
  <h2>Performance Analytics:</h2>
  <p>Performance Analytics enables businesses to set, track, and analyze progress against goals. The product helps you improve performance and accelerate continual service improvement by:</p>
  <ul>
    <li>Tracking critical process metrics and trends.</li>
    <li>Measuring process health and behavior against organizational targets.</li>
    <li>Identifying process patterns and potential bottlenecks before they occur.</li>
    <li>Continually visualizing the health of processes through both historical and real-time statistics in role-based dashboards, so you and your business can make informed decisions.</li>
  </ul>
  <h2>Spotlight:</h2>
  <p>Spotlight illuminates records that you might otherwise overlook due to evaluating only one aspect of a given record. You can define weighted criteria to identify and rank records that require attention, such as when triaging incidents or performing lead scoring. You can rank records based on multiple dimensions, instead of by a single field value such as priority. While most organizations address high-priority items in a timely manner, lower-priority items sometimes are not addressed for an extended period of time. Spotlight helps you focus on items based on business need. Learn more from <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Developer Training</strong></a></p>
  <h2>Virtual Agent:</h2>
  <p>Implementing a virtual agent to handle common requests and tasks enables your users to get immediate help, day or night. Providing your virtual agent on channels familiar to your users, such as third-party messaging apps, offers a convenient way for them to contact you. A virtual agent can also offer personalized customer experiences by applying and remembering user information during the conversation.</p>
  <h2>Natural Language Understanding:</h2>
  <p>Natural Language Understanding (NLU) provides an NLU model builder and an NLU inference service that enable the system to learn and respond to human-expressed intent. By entering natural language examples into the system, you help it evaluate word meanings and contexts so it can infer user or system actions.</p>
  <h2>Predictive Intelligence</h2>
  <p>Predictive Intelligence uses machine learning to shorten triage and categorization time, contributing to higher customer satisfaction. Pinpoint issues and deliver actionable insights to get service owners and agents to faster resolutions.</p>
  <h2>Get started</h2>
  <ul>
    <li>Attend the &quot;Getting Started with Performance Analytics&quot; webinar to learn how you can improve performance by visualizing critical metrics and trends. Discover how you can use Performance Analytics to get real-time insight into influential factors in each stage of your service to help you meet and exceed your business goals.</li>
    <li>Look for information about free classes, office hours, and other Performance Analytics resources. For more in-depth knowledge, enroll for <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Online Training</strong></a></li>
    <li>Pre-packaged Analytics and Reporting Solutions are available to integrate Analytics, Intelligence, and Reporting tools with your ITSM, CSM, or HR ServiceNow products. </li>
  </ul>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/7y2urOkrF</guid><link>https://teletype.in/@onlineprogramming/7y2urOkrF?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/7y2urOkrF?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Use Pandas for ETL: Experience and Practical Tips</title><pubDate>Thu, 07 May 2020 07:12:43 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/eb/36/eb363d2e-549e-4227-a870-32a5b6adfb85.png"></img>This post talks about my experience of building a small scale ETL with Pandas. It also offers some hands-on tips that may help you build ETLs with Pandas.]]></description><content:encoded><![CDATA[
  <h1>Introduction</h1>
  <p>This post talks about my experience of building a small scale ETL with Pandas. It also offers some hands-on tips that may help you build ETLs with Pandas.</p>
  <p><strong>Background</strong>: Recently, I was tasked with importing multiple data dumps into our database. The data dumps came from different sources, e.g., clients and the web. We were lucky that all of our dumps were small, with the largest under 20 GB. Also, the data sources were updated quarterly, or monthly at most, so the ETL didn’t have to be real time, as long as it could be re-run.</p>
  <h1>SQL vs. Pandas</h1>
  <p>For simple transformations, like one-to-one column mappings or calculating extra columns, SQL is good enough.</p>
  <p>However, for more complex tasks, e.g., row deduplication, splitting a row into multiple tables, or creating new aggregate columns with custom group-by logic, implementing these in SQL can lead to long queries, which can be hard to read and maintain.</p>
  <blockquote><em>Python and Pandas are great for many use cases, but Pandas becomes an issue when the datasets get large because it’s grossly inefficient with RAM.</em></blockquote>
  <p>In our case, since the data dumps are not real-time, and small enough to run locally, simplicity is something we want to optimize for.</p>
  <p>Our reasoning goes like this: Since part of our tech stack is built with Python, and we are familiar with the language, using Pandas to write ETLs is just a natural choice besides SQL.</p>
  <p>Writing ETL in a high-level language like Python means we can use the imperative programming style to manipulate data. For more info <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Online</strong></a></p>
  <figure class="m_original">
    <img src="https://teletype.in/files/eb/36/eb363d2e-549e-4227-a870-32a5b6adfb85.png" width="284" />
  </figure>
  <h1>Useful Pandas functions</h1>
  <p>Most of my ETL code revolves around using the following functions:</p>
  <ul>
    <li><code>drop_duplicates</code></li>
    <li><code>dropna</code></li>
    <li><code>replace</code> / <code>fillna</code></li>
    <li><code>df[df[&#x27;column&#x27;] != value]</code>: filtering</li>
    <li><code>apply</code>: transform, or adding new column</li>
    <li><code>merge</code>: SQL like inner, left, or right join</li>
    <li><code>groupby</code></li>
    <li><code>read_csv</code> / <code>to_csv</code></li>
  </ul>
  <p>Functions like <code>drop_duplicates</code> and <code>dropna</code> are nice abstractions and save tens of SQL statements.</p>
  <p>And <code>replace</code> / <code>fillna</code> is a typical step for manipulating the data; the sketch below shows how these functions combine.</p>
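  <p>As a hedged sketch under made-up file and column names, a small ETL pass combining these functions might look like this:</p>
  <pre>import pandas as pd

raw = pd.read_csv("dump.csv")

cleaned = (
    raw.drop_duplicates()                       # row deduplication
       .dropna(subset=["customer_id"])          # drop rows missing the key
       .fillna({"region": "unknown"})           # default for a sparse column
)
cleaned = cleaned[cleaned["status"] != "void"]  # filtering with a boolean mask

# a derived column, then a SQL-like join and group-by
cleaned["year"] = pd.to_datetime(cleaned["created_at"]).dt.year
orders = pd.read_csv("orders.csv")
merged = cleaned.merge(orders, on="customer_id", how="left")
summary = merged.groupby(["region", "year"])["amount"].sum().reset_index()

summary.to_csv("summary.csv", index=False)</pre>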
  <p>One thing that I need to wrap my head around is filtering. Writing</p>
  <pre>df[df[&#x27;column&#x27;] != value]</pre>
  <p>was a bit awkward at first. This has to do with Python and the way it overrides operators like <code>[]</code>. I haven’t peeked into the Pandas implementation, but I can imagine the class structure and the logic needed to implement the <code>__getitem__</code> method.</p>
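  <p>A toy sketch of the mechanism (not Pandas’ actual implementation): <code>__getitem__</code> can dispatch on the argument type, so a string selects a column while a boolean mask selects rows.</p>
  <pre># Toy illustration of operator overriding; the real Pandas code is far richer.
class TinyFrame:
    def __init__(self, columns):
        self.columns = columns  # dict of column name -&gt; list of values

    def __getitem__(self, key):
        if isinstance(key, str):          # df["column"] -&gt; column values
            return self.columns[key]
        # otherwise treat key as a boolean mask over rows
        return TinyFrame({
            name: [v for v, keep in zip(values, key) if keep]
            for name, values in self.columns.items()
        })

df = TinyFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
mask = [v != 2 for v in df["a"]]   # the df[df["a"] != 2] idiom, spelled out
print(df[mask].columns)            # {'a': [1, 3], 'b': ['x', 'z']}</pre>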
  <h1>Pandas + Jupyter Notebook</h1>
  <p>Data processing is often exploratory at first. This is especially true for unfamiliar data dumps. We need to see the shape / columns / count / frequencies of the data, and write our next line of code based on our previous output. So the process is iterative.</p>
  <p>One tool that comes in handy with Python / Pandas is the Jupyter Notebook (covered in this <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Course</strong></a>). It’s like a Python shell, where we write code, execute it, and check the output right away. However, it offers an enhanced, modern web UI that makes data exploration smoother.</p>
  <p>Also, for processing data, if we start from an <code>etl.py</code> file instead of a notebook, we will need to run the entire <code>etl.py</code> many times because of bugs or typos in the code, which can be slow.</p>
  <p>In a Jupyter notebook, processing results are kept in memory, so if any section needs fixes, we simply change a line in that section and re-run it. There is no need to re-run the whole notebook. (Note: to be able to do so, we need good conventions, like no reused variable names; see my discussion below about conventions.)</p>
  <p>My workflow was usually to start with a notebook, create a new section, write a bunch of pandas code, print intermediate results, keep the output as a reference, and move on to the next section. Eventually, when I finish all the logic in a notebook, I export the notebook as a <code>.py</code> file and delete the notebook.</p>
  <p>While writing code in Jupyter notebooks, I established a few conventions to avoid the mistakes I often made.</p>
  <ul>
    <li>Avoid global variables; no reused variable names across sections.</li>
    <li>Avoid writing logic at the root level; wrap it in functions so that it can be reused.</li>
    <li>After seeing the output, write down the findings in code comments before starting the next section. Doing so helps clear thinking and avoids missing details.</li>
  </ul>
  <h1>Stabilizing IDs</h1>
  <p>When doing data processing, it’s common to generate UUIDs for new rows. For debugging and testing purposes, it’s just easier if IDs are deterministic between runs. In other words, running the ETL a second time shouldn’t change all the generated UUIDs.</p>
  <p>To support this, we save all generated IDs to a file, e.g., <code>generated/ids.csv</code>. This file is the mapping from the old primary key to the newly generated UUIDs. We sort the file by the old primary key column and commit it into git.</p>
  <blockquote>To get in-depth knowledge, enroll for a live free demo on <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Training</strong></a></blockquote>
  <p>This way, whenever we re-run the ETL and see changes to this file, the diffs will tell us what got changed and help us debug.</p>
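  <p>A minimal sketch of this convention, assuming the mapping file may not exist on the first run (the file path and column names are illustrative):</p>
  <pre># Sketch: keep UUIDs deterministic across ETL runs by persisting the
# old-key -&gt; UUID mapping in generated/ids.csv and only minting new ones.
import os
import uuid
import pandas as pd

MAPPING_PATH = "generated/ids.csv"

def stable_ids(old_keys):
    if os.path.exists(MAPPING_PATH):
        mapping = pd.read_csv(MAPPING_PATH, dtype=str)
        known = dict(zip(mapping["old_key"], mapping["uuid"]))
    else:
        known = {}
    for key in old_keys:
        if key not in known:          # mint UUIDs only for unseen keys
            known[key] = str(uuid.uuid4())
    out = pd.DataFrame(sorted(known.items()), columns=["old_key", "uuid"])
    os.makedirs(os.path.dirname(MAPPING_PATH), exist_ok=True)
    out.to_csv(MAPPING_PATH, index=False)  # sorted, so git diffs stay readable
    return known

ids = stable_ids(["A-100", "A-101"])</pre>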

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/xvk2HNLcN</guid><link>https://teletype.in/@onlineprogramming/xvk2HNLcN?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/xvk2HNLcN?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Salesforce Analytics Cloud Connector - Mule 4</title><pubDate>Wed, 06 May 2020 05:32:13 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/dc/89/dc897cda-cac4-46d3-9a44-6d22ef25ffb1.png"></media:content><description><![CDATA[<img src="https://docs.mulesoft.com/connectors/_images/salesforce/salesforce-analytics-dc-basic-auth.png"></img>Anypoint Connector for Salesforce Analytics Cloud (Salesforce Analytics Connector) enables you to connect to the Salesforce Analytics Cloud application using the Salesforce External Data API. The connector exposes convenient methods for creating, deleting and populating data sets into Salesforce Analytics Cloud system. Load data into Analytics Cloud from many different data sources whether they are on-premise or on the cloud. Go beyond .csv files with this connector.]]></description><content:encoded><![CDATA[
  <p>Anypoint Connector for Salesforce Analytics Cloud (Salesforce Analytics Connector) enables you to connect to the Salesforce Analytics Cloud application using the Salesforce External Data API. The connector exposes convenient methods for creating, deleting, and populating data sets in the Salesforce Analytics Cloud system. Load data into Analytics Cloud from many different data sources, whether they are on-premises or in the cloud. Go beyond .csv files with this connector.</p>
  <h2>Prerequisites</h2>
  <p>To use this information, you should be familiar with Salesforce Analytics, Mule, Anypoint Connectors, Anypoint Studio, Mule concepts, elements in a Mule flow, and Global Elements.</p>
  <p>You need login credentials to test your connection to your target resource.</p>
  <h2>POM File Information</h2>
  <pre>&lt;dependency&gt;
  &lt;groupId&gt;com.mulesoft.connectors&lt;/groupId&gt;
  &lt;artifactId&gt;mule-sfdc-analytics-connector&lt;/artifactId&gt;
  &lt;version&gt;RELEASE&lt;/version&gt;
  &lt;classifier&gt;mule-plugin&lt;/classifier&gt;
&lt;/dependency&gt;</pre>
  <p>Mule converts RELEASE to the latest version.</p>
  <p>To specify a version, view Salesforce Einstein Analytics Connector in Anypoint Exchange and click <strong>Dependency Snippets</strong>. For more info <a href="https://onlineitguru.com/mulesoft-training.html" target="_blank"><strong>Mulesoft Training</strong></a></p>
  <h2>Connect in Design Center</h2>
  <ol>
    <li>In Design Center, click a trigger, such as this connector’s trigger, an HTTP Listener, or a Scheduler trigger.</li>
    <li>To create an optional global element for the connector, you can choose from the following options. More information is provided in the sections that follow, and links to Salesforce documents are listed in the See Also section of this document.</li>
    <ul>
      <li>Required Parameters for Basic Username Password Authentication</li>
      <li>Required Parameters for the OAuth 2.0 Configuration</li>
      <li>Required Parameters for the OAuth 2.0 JWT Bearer Configuration</li>
      <li>Required Parameters for the OAuth 2.0 SAML Bearer Configuration</li>
    </ul>
    <li>Select the plus sign to add a component.</li>
    <li>Select the connector as a component.</li>
    <li>Configure these fields for Upload external data into new dataSet and start processing operation:</li>
    <ul>
      <li>Type - Type of the records to be inserted. You need to upload a JSON file representing the schema of the dataset to be created.</li>
      <li>Records - DataSense expression; the records to be inserted</li>
      <li>Operation - Which operation to use when you’re loading data into the DataSet</li>
      <li>Description</li>
      <li>Label</li>
      <li>Data Set Name</li>
    </ul>
  </ol>
  <h3>Required Parameters for Basic Username Password Authentication</h3>
  <ul>
    <li>Username: Enter the Salesforce Analytics username.</li>
    <li>Password: Enter the corresponding password.</li>
  </ul>
  <figure class="m_original">
    <img src="https://docs.mulesoft.com/connectors/_images/salesforce/salesforce-analytics-dc-basic-auth.png" width="606" />
  </figure>
  <h3>Required Parameters for the OAuth 2.0 Configuration</h3>
  <ul>
    <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
    <li>Consumer Secret - The consumer secret for the connector to access Salesforce.</li>
  </ul>
  <figure class="m_original">
    <img src="https://docs.mulesoft.com/connectors/_images/salesforce/salesforce-analytics-dc-oauth.png" width="494" />
  </figure>
  <h3>Required Parameters for the OAuth 2.0 JWT Bearer Configuration</h3>
  <ul>
    <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
    <li>Keystore File - See Generating a Keystore File.</li>
    <li>Store Password - The password for the keystore.</li>
    <li>Principal - The Salesforce username that you want to use.</li>
  </ul>
  <figure class="m_original">
    <img src="https://docs.mulesoft.com/connectors/_images/salesforce/salesforce-analytics-dc-jwt.png" width="668" />
  </figure>
  <h4>Required Parameters for the OAuth 2.0 SAML Bearer Configuration</h4>
  <ul>
    <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
    <li>Keystore File - The path to the keystore used to sign data during authentication. Only the Java keystore format is allowed.</li>
    <li>Store Password - Key store password</li>
  </ul>
  <figure class="m_original">
    <img src="https://docs.mulesoft.com/connectors/_images/salesforce/salesforce-analytics-saml.png" width="598" />
  </figure>
  <h4>Generating a Keystore File</h4>
  <p>The Keystore is the path to the keystore used to sign data during authentication. Only Java keystore format is allowed.</p>
  <p>To generate a keystore file:</p>
  <ol>
    <li>Go to your Mule workspace, and open the command prompt (for Windows) or Terminal (for Mac).</li>
    <li>Type <code>keytool -genkeypair -alias salesforce-cert -keyalg RSA -keystore salesforce-cert.jks</code> and press enter.</li>
    <li>Enter the following details:</li>
    <ul>
      <li>Password for the keystore.</li>
      <li>Your first name and last name.</li>
      <li>Your organization unit.</li>
      <li>Name of your city, state, and the two-letter code of your country. The system generates a Java keystore file containing a private/public key pair in your workspace.</li>
    </ul>
    <li>Provide the file path for the keystore in your connector configuration. Then type <code>keytool -exportcert -alias salesforce-cert -file salesforce-cert.crt -keystore salesforce-cert.jks</code> and press Enter. The system exports the public key from the keystore into the workspace. This is the public key that you need to enter in your Salesforce instance.</li>
    <li>Make sure that you have both the keystore (salesforce-cert.jks) and the public key (salesforce-cert.crt) files in your workspace.</li>
  </ol>
  <h2>Add the Connector to a Studio Project</h2>
  <p>Anypoint Studio provides two ways to add the connector to your Studio project: from the Exchange button in the Studio taskbar or from the Mule Palette view.</p>
  <h3>Add the Connector Using Exchange</h3>
  <ol>
    <li>In Studio, create a Mule project.</li>
    <li>Click the Exchange icon <strong>(X)</strong> in the upper-left of the Studio task bar.</li>
    <li>In Exchange, click <strong>Login</strong> and supply your Anypoint Platform username and password.</li>
    <li>In Exchange, search for &quot;analytics&quot;.</li>
    <li>Select the connector and click <strong>Add to project</strong>.</li>
    <li>Follow the prompts to install the connector.</li>
  </ol>
  <h3>Add the Connector in Studio</h3>
  <ol>
    <li>In Studio, create a Mule project.</li>
    <li>In the Mule Palette view, click <strong>(X) Search in Exchange</strong>.</li>
    <li>In <strong>Add Modules to Project</strong>, type &quot;analytics&quot; in the search field.</li>
    <li>Click this connector’s name in <strong>Available modules</strong>.</li>
    <li>Click <strong>Add</strong>.</li>
    <li>Click <strong>Finish</strong>.</li>
  </ol>
  <h3>Configure in Studio</h3>
  <ol>
    <li>Drag the connector to the Studio canvas.</li>
    <li>To create a global element for the connector, set these fields:</li>
    <ul>
      <li>Basic Authentication:</li>
      <ul>
        <li>Username: Enter the Salesforce username.</li>
        <li>Password: Enter the corresponding password.</li>
        <li>Security Token: Enter the corresponding security token.</li>
      </ul>
      <li>OAuth 2.0:</li>
      <ul>
        <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
        <li>Consumer Secret - The consumer secret for the connector to access Salesforce.</li>
      </ul>
      <li>OAuth 2.0 JWT:</li>
      <ul>
        <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
        <li>Keystore File - See Generating a Keystore File.</li>
        <li>Store Password - The password for the keystore.</li>
        <li>Principal - The Salesforce username that you want to use.</li>
      </ul>
      <li>OAuth 2.0 SAML:</li>
      <ul>
        <li>Consumer Key - The consumer key for the Salesforce connected app.</li>
        <li>Keystore File - See Generating a Keystore File.</li>
        <li>Store Password - The password for the keystore.</li>
        <li>Principal - The Salesforce username that you want to use.</li>
      </ul>
    </ul>
    <li>Configure these fields for upload external data into new dataSet and start processing operation:</li>
  </ol>
  <blockquote>To get in-depth knowledge, enroll for a live free demo on <a href="https://onlineitguru.com/mulesoft-training.html" target="_blank"><strong>Mulesoft Online Training</strong></a></blockquote>
  <ol>
    <ul>
      <li>Type - Type of the records to insert. Select a JSON file representing the schema of the dataset to be created.</li>
      <li>Records - DataSense expression - the records to be inserted.</li>
      <li>Operation - Which operation to use when you’re loading data into a data set.</li>
      <li>Description</li>
      <li>Label</li>
      <li>Data Set Name</li>
    </ul>
  </ol>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/yow4ZQb93</guid><link>https://teletype.in/@onlineprogramming/yow4ZQb93?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/yow4ZQb93?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Tableau Architecture &amp; Server Components</title><pubDate>Mon, 04 May 2020 10:59:06 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/7e/bf/7ebf5082-2b9c-4da9-bac8-3dea6b5b5449.png"></img>Tableau Server is designed in a way to connect many data tiers. It can connect clients from desktop, mobile, and web. Tableau Desktop is a robust data visualization tool. It is highly available and secure.]]></description><content:encoded><![CDATA[
  <p>Tableau Server is designed to connect to many data tiers. It can serve clients on desktop, mobile, and web. Tableau Desktop is a robust data visualization tool. Tableau Server is highly available and secure.</p>
  <p>It can run on both virtual and physical machines. It is a multi-user, multi-process and multi-threaded system.</p>
  <h2>Tableau Server Architecture</h2>
  <p>The various layers used in the Tableau server are given in the following architecture diagram.</p>
  <p>Let&#x27;s study the different components of the Tableau architecture.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/7e/bf/7ebf5082-2b9c-4da9-bac8-3dea6b5b5449.png" width="528" />
  </figure>
  <p><strong>Data Server</strong></p>
  <p>The primary component of the Tableau architecture is the set of data sources it can connect to.</p>
  <p>Tableau can connect to multiple data sources. These data sources can be on-premises or remotely located. It can connect to a database, an Excel file, and a web application all at the same time. Tableau can connect to data from heterogeneous environments. It can blend the data from multiple data sources. It can also create relationships between various types of data sources. Learn more from <a href="https://onlineitguru.com/tableau-training" target="_blank"><strong>Tableau Training</strong></a></p>
  <p><strong>Data Connectors</strong></p>
  <p>The Data Connectors provide an interface to connect external data sources to Tableau Data Server.</p>
  <p>Tableau has an in-built ODBC/SQL connector. This ODBC connector can connect to any database without using its native connector. Tableau has an option to select both live and extract data. Based on the usage, one can easily switch between extracted and live data.</p>
  <ul>
    <li><strong>Live Connection or Real-time data: </strong>Tableau can connect to real-time data by linking to the external database directly. It uses the infrastructure of the existing database system by sending dynamic MDX (Multidimensional Expressions) and SQL statements. This feature links Tableau to the live data rather than importing the data. It capitalizes on the investment an organization has made in a fast, optimized database system. In many enterprises, the size of the database is huge, and it is updated periodically. In those cases, Tableau works as a front-end visualization tool by connecting to the live data.</li>
  </ul>
  <p><strong>Components of Tableau Server</strong></p>
  <p>The different components present in a Tableau server are:</p>
  <ul>
    <li>Application Server</li>
    <li>VizQL Server</li>
    <li>Data Server</li>
  </ul>
  <p><strong>A) Application Server:</strong></p>
  <p>The application server is used to provide authentication and authorization. It handles the administration and permissions for the web and mobile interfaces. It assures security by recording each session ID on Tableau Server. The administrator can configure the default session timeout on the server.</p>
  <p><strong>B) VizQL Server:</strong></p>
  <p>The VizQL server is used to convert queries from the data source into visualizations. Once the client request is forwarded to the VizQL process, it sends the query directly to the data source and retrieves the information in the form of images. This image or visualization is presented to the user. Tableau Server creates a cache of visualizations to reduce load time. The cache can be shared across many users who have permission to view the visualization.</p>
  <p><strong>C) Data Server:</strong></p>
  <p>The data server is used to manage and store the data from external data sources. It is a central data management system. It provides metadata management, data security, data storage, data connection, and driver requirements. It stores the relevant details of data sets such as metadata, calculated fields, sets, groups, and parameters. The data server can extract data as well as make live connections to external data sources.</p>
  <p><strong>Gateway</strong></p>
  <p>The gateway channels requests from users to the Tableau components. When a client makes a request, it is forwarded to the external load balancer for processing. The gateway works as a distributor of processes to the various components. In the absence of an external load balancer, the gateway also works as a load balancer.</p>
  <p>For a single-server configuration, one primary server or gateway manages all the processes. For multi-server configurations, one physical system works as the primary server while the others are used as worker servers. Only one machine can be used as the primary server in a Tableau Server environment.</p>
  <blockquote>To get in-depth knowledge, enroll for a live free demo on <a href="https://onlineitguru.com/tableau-training" target="_blank"><strong>Tableau Online Course</strong></a></blockquote>
  <p><strong>Clients</strong></p>
  <p>The dashboards and visualizations on Tableau Server can be viewed and edited using different clients: Tableau Desktop, web browsers, and mobile applications.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/Af1snkWCa</guid><link>https://teletype.in/@onlineprogramming/Af1snkWCa?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/Af1snkWCa?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Tableau on Hadoop get more information</title><pubDate>Sat, 02 May 2020 07:34:30 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/cf/9f/cf9fd1cd-b492-436d-8600-470e50e757ce.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/19/e1/19e14e17-77e5-4f5a-a076-6eaf0534d603.png"></img>Tableau, on the similar pace, has been a winner in Data Visualization connecting business users with the intricacies of data and helping them to present it such that the data is easy-to-understand and gives meaningful information to the clients and users.]]></description><content:encoded><![CDATA[
  <p>Tableau, at a similar pace, has been a winner in data visualization, connecting business users with the intricacies of data and helping them present it such that the data is easy to understand and gives meaningful information to clients and users.</p>
  <ul>
    <li>When you work with Hadoop and Tableau, you can connect to your Hadoop cluster in real time and then extract the data into Tableau’s fast in-memory data engine.</li>
    <li>This saves you from suffering Hadoop’s high latency.</li>
    <li>It gives you fast, direct connect to your databases.</li>
    <li>The swift data functions are a result of a powerful query process such that users extract data without waiting for the MapReduce queries to compile and execute.</li>
    <li>The in-memory data storage speeds up the slow databases underneath and further accelerates the overall visualization and reporting process in Tableau.</li>
    <li>The direct database connection in Tableau serves as a significant benefit to everyday users as they can leverage the potential of Hadoop in a familiar and easy-to-use Tableau interface.</li>
  </ul>
  <p>The company partners with Tableau to provide excellent data solutions to business users at 100X the performance, with enterprise-grade security, without the overhead of moving, transforming, and sampling data for analysis. This relieves business groups and data experts by introducing a way to query billions of rows at top speed, an all-in-one solution that will meet your business-on-Hadoop needs.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/19/e1/19e14e17-77e5-4f5a-a076-6eaf0534d603.png" width="930" />
  </figure>
  <p>Connecting Tableau directly to Hadoop is one of the biggest achievements to drive your business today and handle the big data explosion at the same time. The primary advantage of this integration is that you can now take all your data to one location and still access it in a more secure, better-performing, and controlled manner, with logic and syntax that business people can understand and implement.</p>
  <h2><strong>Benefits of using Tableau on Hadoop</strong></h2>
  <p>With the announcement of AtScale joining hands with Tableau to run it on the Hadoop platform, data analysts, business intelligence professionals, and big data and Hadoop aspirants can now learn to work with Tableau and Hadoop together. Soon, you can expect a wave of open job positions for individuals with dexterity in both data technologies. Get more from <a href="https://onlineitguru.com/tableau-server-training.html" target="_blank"><strong>Tableau Server training</strong></a></p>
  <p>But what are the real-world benefits of using the data visualization tool on the Hadoop distributed file system (HDFS)? Commenting on this development, a Tableau partner said, “<em>When looking to propose solutions to our clients, we look for innovative, proven solutions that deliver tangible value. The AtScale approach is well aligned with Tableau’s strategy, both from a business and a technology standpoint. We are excited to take this offer to market!</em>”</p>
  <p>The Tableau ‘BI on Hadoop’ offering will bring security, efficiency, and speed to the big data processes handled with Hadoop, supporting all Hadoop distributions. Customers and clients can query Hadoop in seconds without going through local setup and installation.</p>
  <blockquote>To get in-depth knowledge, enroll for a live free demo on <a href="https://onlineitguru.com/tableau-training" target="_blank"><strong>Tableau Online Training</strong></a></blockquote>
  <p>Ultimately, all paths of learning and using Tableau and Hadoop together lead us to seamless, fast, and competent data analytics in the enterprise, letting us gain traction in the business.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/djPTZRohW</guid><link>https://teletype.in/@onlineprogramming/djPTZRohW?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/djPTZRohW?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Integration Techniques in Service Now</title><pubDate>Fri, 01 May 2020 06:40:40 GMT</pubDate><description><![CDATA[<img src="https://teletype.in/files/cb/67/cb6773c2-d2b5-4b99-b80d-a008d04450c0.jpeg"></img>This is going to be a technical discussion, and a long post. In this post we will look at the different integration techniques and would help you finally pick the best one based on its Pros and Cons.]]></description><content:encoded><![CDATA[
  <p>This is going to be a technical discussion, and a long post. In this post we will look at the different integration techniques and help you pick the best one based on its pros and cons.</p>
  <p>This post will be divided into two sections. The first section is about <em>Initial Setup,</em> and the second section is about the <em>Freeflow of updates </em>between Service Now and any other tool Service Now is integrating with.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/cb/67/cb6773c2-d2b5-4b99-b80d-a008d04450c0.jpeg" width="284" />
  </figure>
  <p><strong>Initial Setup:</strong></p>
  <p>Initial setup is a phase where there is no data in Service Now, and you need to bring in an entire dump of data from the other tool.</p>
  <p> </p>
  <p><strong>Free flow of Updates:</strong></p>
  <p>Here you are more concerned about how to get the data that is updated/inserted in the other tool into Service Now in real time. For example, you need to quickly get an Incident created/updated in NetCool into Service Now.</p>
  <p>Different ways we can do it:</p>
  <p><em>JDBC/FTP(s)/HTTP(s)</em></p>
  <p>JDBC offers a very robust way to connect directly to a database like SQL Server or Oracle using drivers and pull the information into Service Now. You can even supply your SQL query in Service Now, which will be executed and the data fetched.</p>
  <p><em>Pros:</em></p>
  <p>It’s a kick-ass way of moving data between two databases, and the fastest when it comes to setting up an initial dump of data. You will be talking directly to the database, with no middleman. I would go for it without thinking twice: it’s fast and secure.</p>
  <p><em>Cons:</em></p>
  <p>It really is very lethargic when it comes to free flow of updates. Say something got changed in the other tool; JDBC cannot tell you until the scheduled job that picks up the data runs in Service Now. So it’s NOT recommended for free flow of updates.</p>
  <p>I wouldn&#x27;t talk much about the good ol’ FTP(s) and HTTP(s). Both are excellent ways of getting information into Service Now. They operate the same way as JDBC but may be slower, because we are doing the entire shebang at the application layer (over HTTP and FTP). Other than that, I don’t see any reason not to use FTP or HTTP. Like JDBC, they suffer from the same disadvantage when it comes to free flow of updates. Get more from <a href="https://onlineitguru.com/servicenow-admin-training" target="_blank"><strong>Servicenow Admin training</strong></a></p>
  <p> </p>
  <p><em><strong>Web services:</strong></em></p>
  <p>Web services come in two flavors: SOAP and REST.</p>
  <p>Both SOAP and REST are very good when you are using them for Freeflow of updates. There are many designs when it comes to SOAP and REST. Let’s look at them one by one.</p>
  <p><em>SOAP:</em></p>
  <p>You are presented with three design options here :</p>
  <p>a) Using Direct Web services.</p>
  <p>b) Using Scripted Web services.</p>
  <p>c) Using an Inbound table. Learn more from <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Developer Training</strong></a></p>
  <p><strong>a) Using Direct Web services:</strong></p>
  <p>The advantage Direct Web services offer you is speed. When you use a Direct Web service, you insert directly into a table and get the response. There are many disadvantages though; some of them are:</p>
  <p>1.     You cannot customize the output in case of an error.</p>
  <p>2.   You are directly exposing your Production tables.</p>
  <p>3.   You cannot log the request you received from the external tool.</p>
  <p>4.   You cannot talk to more than one table at a time.</p>
  <p>I wouldn&#x27;t take this approach, unless I have no option. For more info <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Training</strong></a></p>
  <p><strong>b) Using Scripted Web services</strong></p>
  <p>The advantages of Scripted Web services:</p>
  <p>1.     You can talk to more than one table at a time.</p>
  <p>2. You can log the request you received.</p>
  <p>3. You can customize the output, and this will require some customization on your part.</p>
  <p>4. You can hide the Production table by creating a Staging table in the middle; once the data is in the Staging table, write the logic on the Staging table. Don’t be confused: the Staging table here is not the Import set table. We will discuss Import set tables in the next bullet.</p>
  <p>The main disadvantage of Scripted Web services is speed. Apart from that, all is well here. Oh, and you get only one SOAP endpoint per Scripted Web service, which sucks, whereas using Direct Web services or an Import set table, you get a whole lot of SOAP endpoints to update/insert/delete/get, etc.</p>
  <p><strong>c) Using an Import Set table:</strong></p>
  <p>If you ask me, this is the best way to go when you take the SOAP route. Instead of exposing a Production table, you expose an intermediate Import set table that takes the input and gives the output. You have all the flexibility to customize the output, and it’s fast as well, because you are using the OOB setup to write Transform Maps that move the data into Production tables. To get more info <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Online Training</strong></a></p>
  <p><em>REST:</em></p>
  <p>My personal favorite, as REST is one of the most flexible specifications out there when it works with JSON. A hedged sketch of a REST insert into a staging (import set) table follows.</p>
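  <p>As a minimal, hedged sketch (the instance, credentials, staging table, and field names are placeholders; it assumes the standard ServiceNow Import Set REST API):</p>
  <pre># Sketch: push one record into a ServiceNow import set (staging) table over REST.
# A transform map on the staging table then moves the data into production.
import requests

INSTANCE = "your-instance"            # placeholder
STAGING_TABLE = "u_incident_import"   # hypothetical import set table
url = f"https://{INSTANCE}.service-now.com/api/now/import/{STAGING_TABLE}"

payload = {
    "u_short_description": "Disk full on app server",  # made-up staging fields
    "u_source_id": "NC-12345",
}
response = requests.post(
    url,
    json=payload,
    auth=("api_user", "api_password"),   # basic auth placeholders
    headers={"Accept": "application/json"},
)
response.raise_for_status()
print(response.json()["result"])         # per-record transform result</pre>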

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/VJPgt3Iaf</guid><link>https://teletype.in/@onlineprogramming/VJPgt3Iaf?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/VJPgt3Iaf?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>ETL Test Automation Planning for DW/BI Projects</title><pubDate>Thu, 23 Apr 2020 11:07:54 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/69/8b/698b3e78-ec27-4d48-aa5e-99b937a3c85f.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/18/fc/18fcaabb-1492-4da5-b19d-888134978313.png"></img>As businesses create (and need) more data than ever before, the sheer number of BI failures threatens to grow exponentially. This could have a far-reaching impact on the underlying digital transformation initiatives that these projects are designed to enable.]]></description><content:encoded><![CDATA[
  <p>As businesses create (and need) more data than ever before, the sheer number of BI failures threatens to grow exponentially. This could have a far-reaching impact on the underlying digital transformation initiatives that these projects are designed to enable.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/18/fc/18fcaabb-1492-4da5-b19d-888134978313.png" />
  </figure>
  <p>Given that companies are releasing new applications faster than ever (some releasing updates on demand, multiple times a day), too many organizations are using manual ETL test processes and the wrong tools to manage critical parts of releases for highly visible, often customer-facing, applications.</p>
  <p>That translates into risk to customer loyalty, the brand, confidential data, and worse. For more details <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Training</strong></a></p>
  <p>This article explores how applying DevOps-style test automation to DW/BI and other data integration projects can guarantee a high level of data quality, instilling the trust that is essential for the success of BI projects and the digital transformation initiatives that are ultimately driving them.</p>
  <p><strong>Taking a DevOps Approach to DW/BI Testing</strong></p>
  <p>DevOps, with its focus on tool automation across the entire development life cycle, addresses an enormous challenge for big data and DW/BI developers. Many of today&#x27;s big data and DW/BI projects are already leveraging (or actively planning to adopt) agile and DevOps processes — but not for testing.</p>
  <p>DW/BI projects in general are not currently using automated testing tools to the extent needed for project success. Perhaps this is because teams believe the required testing functions are not commercially available or are too complex and expensive to develop in-house. Learn from <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Course</strong></a></p>
  <p>When thinking about what you need to test to ensure data integrity, consider that BI is more than just data warehouses (DW) and extract, transform, and load (ETL).</p>
  <p>Services between the ETL processes, as well as the middleware and dashboard visualizations, also come under the purview of BI.</p>
  <p>Messaging and the contracts negotiated between these layers are complex and require considerable coordination and testing.</p>
  <p>DevOps helps facilitate this with constant deployments and testing. Implementing a DevOps testing approach to DW/BI means automating the testing of different source and target data sets to keep data current.</p>
  <p>This can be tremendously beneficial when handling many (possibly hundreds of) diverse data sources and volumes. Your team will be able to detect errors before they threaten BI applications in production.</p>
  <p>Moreover, you will have more time to fix issues before applications reach production. For more <a href="https://onlineitguru.com/etl-testing-training.html" target="_blank"><strong>ETL Testing Certification</strong></a></p>
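  <p>As a hedged illustration of what one such automated check can look like, here is a tiny source-vs-target reconciliation test (connection handling, table names, and columns are placeholders; SQLite stands in for the real databases):</p>
  <pre># Sketch: a minimal automated reconciliation test between source and target.
# Compares row counts and a column checksum; meant to run in a CI pipeline.
import sqlite3  # stand-in for the real source/target databases

def row_count(conn, table):
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def column_sum(conn, table, column):
    return conn.execute(f"SELECT SUM({column}) FROM {table}").fetchone()[0]

def test_reconciliation(source, target):
    assert row_count(source, "orders") == row_count(target, "dw_orders"), \
        "row counts diverged between source and target"
    assert column_sum(source, "orders", "amount") == \
           column_sum(target, "dw_orders", "amount"), \
        "amount checksum diverged; a transform may be dropping or mutating rows"

# Example wiring with throwaway in-memory databases:
src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (amount REAL)")
tgt.execute("CREATE TABLE dw_orders (amount REAL)")
src.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (5.5,)])
tgt.executemany("INSERT INTO dw_orders VALUES (?)", [(10.0,), (5.5,)])
test_reconciliation(src, tgt)</pre>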
  <p><strong>Why Test Automation?</strong></p>
  <p>Continuous quality is a systematic approach to process improvement in order to achieve the quality goals of development and the business it supports.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/SDIG2edLi</guid><link>https://teletype.in/@onlineprogramming/SDIG2edLi?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/SDIG2edLi?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Object-Relational Mapping (ORM) with ServiceNow Data Entities in Java</title><pubDate>Wed, 22 Apr 2020 09:28:12 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/15/33/15333b80-72d7-4206-a494-7afa3d563011.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/44/b6/44b6c3be-d09a-47a6-938d-25103492abd7.png"></img>Object-relational mapping (ORM) techniques make it easier to work with relational data sources and can bridge your logical business model with your physical storage model. Follow this tutorial to integrate connectivity to ServiceNow data into a Java-based ORM framework, Hibernate.]]></description><content:encoded><![CDATA[
  <p><em>Object-relational mapping (ORM) techniques make it easier to work with relational data sources and can bridge your logical business model with your physical storage model. Follow this tutorial to integrate connectivity to ServiceNow data into a Java-based ORM framework, Hibernate.</em></p>
  <p>You can use Hibernate to map object-oriented domain models to a traditional relational database. The tutorial below shows how to use the CData JDBC Driver for ServiceNow to generate an ORM of your ServiceNow repository with Hibernate.</p>
  <p>Though Eclipse is the IDE of choice for this article, the CData JDBC Driver for ServiceNow works in any product that supports the Java Runtime Environment. In the Knowledge Base you will find tutorials to connect to ServiceNow data from IntelliJ IDEA and NetBeans.</p>
  <h2>Install Hibernate</h2>
  <p>Follow the steps below to install the Hibernate plug-in in Eclipse.</p>
  <p>1.   In Eclipse, navigate to Help -&gt; Install New Software.</p>
  <p>2.   Enter &quot;http://download.jboss.org/jbosstools/neon/stable/updates/&quot; in the Work With box.</p>
  <p>3.   Enter &quot;Hibernate&quot; into the filter box.</p>
  <p>4.   Select Hibernate Tools. Learn more skills from <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>ServiceNow Training</strong></a></p>
  <figure class="m_original">
    <img src="https://teletype.in/files/44/b6/44b6c3be-d09a-47a6-938d-25103492abd7.png" width="911" />
  </figure>
  <h2>Start A New Project</h2>
  <p>Follow the steps below to add the driver JARs in a new project.</p>
  <p>1.   Create a new project. Select Java Project as your project type and click Next. Enter a project name and click Finish.</p>
  <p>2.   Right-click the project and click Properties. Click Java Build Path and then open the Libraries tab.</p>
  <p>3.   Click Add External JARs to add the cdata.jdbc.servicenow.jar library, located in the lib subfolder of the installation directory.</p>
  <h2>Add a Hibernate Configuration File</h2>
  <p>Follow the steps below to configure connection properties to ServiceNow data.</p>
  <p>1.   Right-click on the new project and select New -&gt; Hibernate -&gt; Hibernate Configuration File (cfg.xml).</p>
  <p>2.   Select src as the parent folder and click Next.</p>
  <p>3.   Input the following values:</p>
  <p>o    <strong>Hibernate version</strong>: 5.2</p>
  <p>o    <strong>Database dialect</strong>: Derby</p>
  <p>o    <strong>Driver class</strong>: cdata.jdbc.servicenow.ServiceNowDriver</p>
  <p>o    <strong>Connection URL</strong>: A JDBC URL, starting with <em>jdbc:servicenow:</em> and followed by a semicolon-separated list of connection properties.</p>
  <p>ServiceNow uses the OAuth 2.0 authentication standard. To authenticate using OAuth, you will need to register an OAuth app with ServiceNow to obtain the OAuthClientId and OAuthClientSecret connection properties. In addition to the OAuth values, you will need to specify the Instance, Username, and Password connection properties.</p>
  <p>See the &quot;Getting Started&quot; chapter in the help documentation for a guide on connecting to ServiceNow.</p>
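  <p>For example, a connection URL of the following form combines the OAuth and instance properties (all values are placeholders):</p>
  <p><code>jdbc:servicenow:OAuthClientId=MyOAuthClientId;OAuthClientSecret=MyOAuthClientSecret;Username=MyUsername;Password=MyPassword;Instance=MyInstance;InitiateOAuth=GETANDREFRESH</code></p>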
  <h4>Built-in Connection String Designer</h4>
  <p>For assistance in constructing the JDBC URL, use the connection string designer built into the ServiceNow JDBC Driver. Either double-click the JAR file or execute it from the command line:</p>
  <p><code>java -jar cdata.jdbc.servicenow.jar</code></p>
  <p>Fill in the connection properties and copy the connection string to the clipboard.</p>
  <figure class="m_original">
    <img src="https://teletype.in/files/eb/45/eb45e589-67ec-4561-b3a2-0c1b32b59e30.png" width="436" />
  </figure>
  <h2>Connect Hibernate to ServiceNow Data</h2>
  <p>Follow the steps below to select the configuration you created in the previous step.</p>
  <p>1.   Switch to the Hibernate Configurations perspective: Window -&gt; Open Perspective -&gt; Hibernate.</p>
  <p>2.   Right-click on the Hibernate Configurations panel and click Add Configuration.</p>
  <p>3.   Set the Hibernate version to 5.2.</p>
  <p>4.   Click the Browse button and select the project.</p>
  <p>5.   For the Configuration file field, click Setup -&gt; Use Existing and select the location of the hibernate.cfg.xml file (inside src folder in this demo).</p>
  <p>6.   In the Classpath tab, if there is nothing under User Entries, click Add External JARS and add the driver jar once more. Click OK once the configuration is done.</p>
  <p>7.   Expand the Database node of the newly created Hibernate configurations file.</p>
  <blockquote>Take your career to new heights with <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Developer Training</strong></a> </blockquote>
  <h2>Reverse Engineer ServiceNow Data</h2>
  <p>Follow the steps below to generate the reveng.xml configuration file. You will specify the tables you want to access as objects.</p>
  <p>1.   Switch back to the Package Explorer.</p>
  <p>2.   Right-click your project, select New -&gt; Hibernate -&gt; Hibernate Reverse Engineering File (reveng.xml). Click Next.</p>
  <p>3.   Select src as the parent folder and click Next.</p>
  <p>4.   In the Console configuration drop-down menu, select the Hibernate configuration file you created above and click Refresh.</p>
  <p>5.   Expand the node and choose the tables you want to reverse engineer. Click Finish when you are done.</p>
  <h2>Configure Hibernate to Run</h2>
  <p>Follow the steps below to generate plain old Java objects (POJO) for the ServiceNow tables.</p>
  <p>1.   From the menu bar, click Run -&gt; Hibernate Code Generation -&gt; Hibernate Code Generation Configurations.</p>
  <p>2.   In the Console configuration drop-down menu, select the Hibernate configuration file you created in the previous section. Click Browse next to Output directory and select src.</p>
  <p>3.   Enable the Reverse Engineer from JDBC Connection checkbox. Click the Setup button, click Use Existing, and select the location of the hibernate.reveng.xml file (inside src folder in this demo).</p>
  <p>4.   In the Exporters tab, check Domain code (.java) and Hibernate XML Mappings (hbm.xml).</p>
  <p>5.   Click Run.</p>
  <p>One or more POJOs are created based on the reverse-engineering setting in the previous step.</p>
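  <p>The exact classes depend on the tables you selected. As a rough sketch, a POJO generated for the incident table looks like the following, with only two of the generated properties shown:</p>
  <p><code>// Sketch of a POJO generated for the incident table; the real output</code></p>
  <p><code>// contains a property and accessor pair for every reverse-engineered column.</code></p>
  <p><code>public class incident implements java.io.Serializable {</code></p>
  <p><code>private String sys_id;</code></p>
  <p><code>private String priority;</code></p>
  <p><code>public String getSys_id() { return sys_id; }</code></p>
  <p><code>public void setSys_id(String sys_id) { this.sys_id = sys_id; }</code></p>
  <p><code>public String getPriority() { return priority; }</code></p>
  <p><code>public void setPriority(String priority) { this.priority = priority; }</code></p>
  <p><code>}</code></p>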
  <h2>Insert Mapping Tags</h2>
  <p>For each mapping you have generated, you will need to create a mapping tag in hibernate.cfg.xml to point Hibernate to your mapping resource. Open hibernate.cfg.xml and insert the mapping tags like so:</p>
  <p><code>&lt;hibernate-configuration&gt;</code></p>
  <p><code>&lt;session-factory&gt;</code></p>
  <p><code>&lt;property name=&quot;hibernate.connection.driver_class&quot;&gt;</code></p>
  <p><code>cdata.jdbc.servicenow.ServiceNowDriver</code></p>
  <p><code>&lt;/property&gt;</code></p>
  <p><code>&lt;property name=&quot;hibernate.connection.url&quot;&gt;</code></p>
  <p><code>jdbc:servicenow:OAuthClientId=MyOAuthClientId;OAuthClientSecret=MyOAuthClientSecret;Username=MyUsername;Password=MyPassword;Instance=MyInstance;InitiateOAuth=GETANDREFRESH</code></p>
  <p><code>&lt;/property&gt;</code></p>
  <p><code>&lt;property name=&quot;hibernate.dialect&quot;&gt;</code></p>
  <p><code>org.hibernate.dialect.SQLServerDialect</code></p>
  <p><code>&lt;/property&gt;</code></p>
  <p><code>&lt;mapping resource=&quot;incident.hbm.xml&quot;&gt;&lt;/mapping&gt;</code></p>
  <p><code>&lt;/session-factory&gt;</code></p>
  <p><code>&lt;/hibernate-configuration&gt;</code></p>
  <h2>Execute SQL</h2>
  <p>Using the entity you created from the last step, you can now search ServiceNow data:</p>
  <p><code>import java.util.List;</code></p>
  <p><code>import org.hibernate.Session;</code></p>
  <p><code>import org.hibernate.cfg.Configuration;</code></p>
  <p><code>import org.hibernate.query.Query;</code></p>
  <p><code>public class ServiceNowQuery {</code></p>
  <p><code>public static void main(String[] args) {</code></p>
  <p><code>Session session = new Configuration().configure().buildSessionFactory().openSession();</code></p>
  <p><code>String SELECT = &quot;FROM incident i WHERE category = :category&quot;;</code></p>
  <p><code>Query&lt;incident&gt; q = session.createQuery(SELECT, incident.class);</code></p>
  <p><code>q.setParameter(&quot;category&quot;, &quot;request&quot;);</code></p>
  <p><code>List&lt;incident&gt; resultList = q.list();</code></p>
  <p><code>// getter names follow the properties generated during reverse engineering</code></p>
  <p><code>for (incident s : resultList) {</code></p>
  <p><code>System.out.println(s.getSys_id());</code></p>
  <p><code>System.out.println(s.getPriority());</code></p>
  <p><code>}</code></p>
  <p><code>}</code></p>
  <p><code>}</code></p>
  <p>To get in-depth knowledge, enroll for a live free demo on <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Online Training</strong></a></p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@onlineprogramming/xNMXWj-jk</guid><link>https://teletype.in/@onlineprogramming/xNMXWj-jk?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming</link><comments>https://teletype.in/@onlineprogramming/xNMXWj-jk?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=onlineprogramming#comments</comments><dc:creator>onlineprogramming</dc:creator><title>Servicenow IntegrationHub with types</title><pubDate>Tue, 21 Apr 2020 08:47:12 GMT</pubDate><media:content medium="image" url="https://teletype.in/files/ca/c6/cac6d2c8-32dc-4452-a694-11aa30517c99.png"></media:content><description><![CDATA[<img src="https://teletype.in/files/53/a4/53a4f97b-e7c0-4398-9862-40c5f6ba82d5.png"></img>ServiceNow IntegrationHub is a new Now Platform feature in the Kingston release.]]></description><content:encoded><![CDATA[
  <p>ServiceNow IntegrationHub is a new Now Platform feature in the Kingston release.</p>
  <p>Automate integration tasks using ServiceNow-built components for Flow Designer, or develop custom integrations. Requires a separate subscription.</p>
  <h2>IntegrationHub features</h2>
  <figure class="m_original">
    <img src="https://teletype.in/files/53/a4/53a4f97b-e7c0-4398-9862-40c5f6ba82d5.png" width="876" />
  </figure>
  <h1>ServiceNow eBonding spoke</h1>
  <p>The ServiceNow eBonding spoke demonstrates some common integration design patterns through the typical use case of synchronizing incidents across ServiceNow instances.</p>
  <h2>Base system eBonding actions</h2>
  <p>If you have multiple production instances in your environment, you might have a need to synchronize data across these instances. For example, one instance might manage internal applications (your source system) and another manages external customer facing applications (your target system).</p>
  <p>A common use case is an incident that is initially opened on the source system but requires a correlated incident to be created and tracked on the target instance. The ServiceNow eBonding integration contains the following OOB actions to assist in creating the synchronization. Build these skills with <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Developer Training</strong></a></p>
  <p><strong>Create Remote Incident action</strong></p>
  <p>This action uses the source system incident details to create a new incident on the target instance. It passes the source incident number as the Correlation ID on the target system, then writes the target system incident number back to the Correlation ID of the source incident.</p>
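  <p>The action itself is built in Flow Designer, but conceptually the exchange is an ordinary REST call to the target instance&#x27;s Table API. The following Java sketch shows roughly what that request looks like; the host, credentials, and field values are placeholders:</p>
  <p><code>// Rough sketch of the REST call behind creating a remote incident;</code></p>
  <p><code>// host, credentials, and field values are placeholders.</code></p>
  <p><code>import java.net.URI;</code></p>
  <p><code>import java.net.http.HttpClient;</code></p>
  <p><code>import java.net.http.HttpRequest;</code></p>
  <p><code>import java.net.http.HttpResponse;</code></p>
  <p><code>import java.util.Base64;</code></p>
  <p><code>public class CreateRemoteIncident {</code></p>
  <p><code>public static void main(String[] args) throws Exception {</code></p>
  <p><code>String auth = Base64.getEncoder().encodeToString(&quot;admin:password&quot;.getBytes());</code></p>
  <p><code>// correlation_id carries the source incident number so the two records stay linked</code></p>
  <p><code>String body = &quot;{\&quot;short_description\&quot;:\&quot;Copied from source\&quot;,\&quot;correlation_id\&quot;:\&quot;INC0010001\&quot;}&quot;;</code></p>
  <p><code>HttpRequest request = HttpRequest.newBuilder()</code></p>
  <p><code>.uri(URI.create(&quot;https://target-instance.service-now.com/api/now/table/incident&quot;))</code></p>
  <p><code>.header(&quot;Authorization&quot;, &quot;Basic &quot; + auth)</code></p>
  <p><code>.header(&quot;Content-Type&quot;, &quot;application/json&quot;)</code></p>
  <p><code>.POST(HttpRequest.BodyPublishers.ofString(body))</code></p>
  <p><code>.build();</code></p>
  <p><code>HttpResponse&lt;String&gt; response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());</code></p>
  <p><code>System.out.println(response.body()); // the response includes the new incident&#x27;s number</code></p>
  <p><code>}</code></p>
  <p><code>}</code></p>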
  <p><strong>Lookup Remote Incident action</strong></p>
  <p>This action takes the remote incident number as input and retrieves more details about that incident, such as: short description, description, priority, etc.</p>
  <p><strong>Update Remote Incident action</strong></p>
  <p>This action uses the source incident details to update the remote incident. The remote incident is looked up using the Correlation ID stored on the source instance’s incident. Learn more from <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Certification</strong></a></p>
  <h2>Credential and Connection information for eBonding</h2>
  <p>When building actions, decouple the connection and credential information from the action itself so that you can transition seamlessly between distinct production environments. This also makes it easier to share and create content through the ServiceNow Store.</p>
  <p>As part of the eBonding example, you can associate an OOB connection alias (sn_ebonding_ah.ServiceNow). Create an HTTP connection record and associate it with this alias. For credentials, ServiceNow web services support a variety of authentication mechanisms. For more details, see <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Online Training</strong></a></p>
  <p>Create a BasicAuth credential to start with; note that the login ID must have permissions to create, read, and update incidents on your remote system.</p>
  <h2>Additional eBonding Script</h2>
  <p>The Payloadbuilder script is included with this example. It builds a payload by reading a set of fields from the incident table; the payload is then used in the REST steps of these actions.</p>
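  <p>The real Payloadbuilder runs as a server-side script on the instance, but the pattern is easy to illustrate. Here is a rough Java sketch of the same idea, with an illustrative field list:</p>
  <p><code>// Illustrative sketch of the payload-building pattern: read a fixed set of</code></p>
  <p><code>// incident fields and serialize them as the JSON body for a REST step.</code></p>
  <p><code>import java.util.LinkedHashMap;</code></p>
  <p><code>import java.util.Map;</code></p>
  <p><code>import java.util.stream.Collectors;</code></p>
  <p><code>public class PayloadBuilder {</code></p>
  <p><code>public static void main(String[] args) {</code></p>
  <p><code>Map&lt;String, String&gt; fields = new LinkedHashMap&lt;&gt;();</code></p>
  <p><code>fields.put(&quot;short_description&quot;, &quot;Email server down&quot;);</code></p>
  <p><code>fields.put(&quot;priority&quot;, &quot;1&quot;);</code></p>
  <p><code>fields.put(&quot;category&quot;, &quot;request&quot;);</code></p>
  <p><code>String json = fields.entrySet().stream()</code></p>
  <p><code>.map(e -&gt; &quot;\&quot;&quot; + e.getKey() + &quot;\&quot;:\&quot;&quot; + e.getValue() + &quot;\&quot;&quot;)</code></p>
  <p><code>.collect(Collectors.joining(&quot;,&quot;, &quot;{&quot;, &quot;}&quot;));</code></p>
  <p><code>System.out.println(json); // {&quot;short_description&quot;:&quot;Email server down&quot;,...}</code></p>
  <p><code>}</code></p>
  <p><code>}</code></p>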
  <p><strong>Slack spoke</strong></p>
  <p>The Slack spoke provides actions which post messages and ServiceNow incident, problem, and change record details to Slack channels. For more <a href="https://onlineitguru.com/servicenow-online-training.html" target="_blank"><strong>Servicenow Training</strong></a></p>
  <h2>Slack actions</h2>
  <p>The Slack spoke is an available integration through IntegrationHub. You can specify the following Slack actions within a flow.</p>

]]></content:encoded></item></channel></rss>