DronaBlog

Sunday, June 27, 2021

What is Common Table Expression (CTE) in Snowflake?

Are you working with Snowflake and would like to understand Common Table Expressions, also known as CTEs? If so, then you have reached the right place. In this article, we will understand what a Common Table Expression is. We will also explore how to use CTEs in queries.

A) What is a Common Table Expression, i.e., CTE?

      A Common Table Expression (CTE) is a named subquery defined in a WITH clause. It is equivalent to a temporary view.

      The CTE contains an optional list of columns and a SELECT statement. The output of a CTE is a table with the defined columns.





B) What is the syntax for CTE?

     The syntax for a CTE is as below:

      WITH
            TEMP_CTE (COL_1, COL_2) AS
              (SELECT col_nm1, col_nm2 FROM TABLE)
      SELECT COL_1, COL_2 FROM TEMP_CTE

C) What is a recursive CTE?

      A recursive CTE is a CTE that references itself. In a recursive CTE, we can join a table to itself as many times as needed.
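      As a minimal sketch, assuming a hypothetical EMPLOYEE table with EMP_ID and MANAGER_ID columns, a recursive CTE can walk a reporting hierarchy:

      WITH RECURSIVE EMP_HIERARCHY (EMP_ID, MANAGER_ID, LVL) AS
        (SELECT EMP_ID, MANAGER_ID, 1
           FROM EMPLOYEE
          WHERE MANAGER_ID IS NULL          -- anchor: top-level employees
         UNION ALL
         SELECT E.EMP_ID, E.MANAGER_ID, H.LVL + 1
           FROM EMPLOYEE E
           JOIN EMP_HIERARCHY H
             ON E.MANAGER_ID = H.EMP_ID)    -- recursive step: the CTE references itself
      SELECT EMP_ID, LVL FROM EMP_HIERARCHY;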


D) What are the benefits of CTE?

      1) CTEs increase the modularity of SQL programs.

       2) CTEs simplify maintenance.

       3) A recursive CTE can be used to process hierarchical data in a table.





E) What are the naming conventions for CTE?

      We should avoid CTE names that match database view or table names. If a query defines a CTE whose name matches a view or table, the CTE takes precedence over the table or view name, which can produce unexpected results.
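      For instance, if a permanent table named CUSTOMER already exists, a CTE with the same name shadows it:

      -- Returns the CTE's single row, not the contents of the CUSTOMER table
      WITH CUSTOMER AS (SELECT 1 AS ID)
      SELECT * FROM CUSTOMER;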

         For more details about Snowflake, refer to this video.



Tuesday, June 22, 2021

When does the Tokenization job get executed in Informatica MDM?

Are you looking for details about when the tokenization job is executed in Informatica MDM? Would you be interested in knowing the relationship between a match job and a tokenization job? If so, then you have reached the right place. In this article, we will explore the instances during which the tokenization job gets triggered.

A. Why do we need a tokenization process in MDM?
          Tokenization is a process that generates tokens based on the fuzzy match key and the other match column configuration. The generated tokens are stored in the <BO>_STRP table. This STRP table is used as input to the match process, which matches records based on these tokens.
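          As a quick, hedged illustration (the hub typically names this table C_<BO>_STRP; the PARTY base object here is an assumption), the generated tokens can be inspected directly:

          -- Hypothetical: count the match tokens generated for a PARTY base object
          SELECT COUNT(*) FROM C_PARTY_STRP;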





B. When is the tokenization process executed?
           The tokenization process gets triggered at various points. These are:
          1)  Manual tokenization execution
          2)  During the match process
          3)  During the load process

1) Manual tokenization job execution
       We can execute the tokenization job in Informatica MDM manually whenever it is needed. It is recommended that the tokenization job be executed separately rather than in conjunction with a load or match job.





2)  Tokenization process as part of match job
       If the match tokens need to be updated, the match process in Informatica MDM automatically starts the tokenization process as part of the match job. This scenario can happen when new records have been added or existing records have been updated.


3) Tokenization process during load job.
         If we enable the 'Generate Match Tokens on Load' property on the base object, then the tokenization job is triggered when records are loaded into the base object. It is not recommended to enable this property, as it adversely impacts load job performance.






Learn more about the tokenization process here.


Friday, June 18, 2021

How to Use the LIKE Operator in Snowflake?

   Are you looking for details about what collate is in Snowflake? Are you also interested in knowing the purpose of collate and how to use it? If so, then you have reached the right place. In this article, we will learn more about collate, or collation, in Snowflake, and understand how to use it to achieve LIKE functionality.




A) What is collate?

          The collate function in Snowflake allows specifying alternative rules for comparing strings.

B) What is the purpose of collate in Snowflake?

        The collate feature in Snowflake is used to compare and sort data. The comparison and sorting are based on a particular language or other user-specified rules.

         Text strings in Snowflake are stored using the UTF-8 character set. Comparing strings based on their Unicode values alone may not provide the desired output, for the following reasons:

          1. Special characters in a language do not sort according to that language's standards.

          2. We may want sorting based on special rules, e.g., a case-insensitive sort.
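          Collation can also be declared once on a column so that every comparison and sort on it follows the rule. A minimal sketch, assuming a hypothetical EMP table ('en-ci' requests English, case-insensitive collation):

          CREATE TABLE EMP (NAME VARCHAR COLLATE 'en-ci');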


C) What types of rules can be used with collate in Snowflake?

     Here is a list of rules that can be used with collate:

     1. Different character sets for different languages

     2. Case-insensitive comparisons

     3. Accent sensitivity, e.g., a, á, ä

     4. Punctuation sensitivity, e.g., 'P-Q-R' vs. 'PQR'

     5. Sorting based on the first letter in the strings

     6. Trimming leading and trailing spaces before sorting

     7. Other options can be implemented based on business needs





D) Where to use collate in Snowflake SQL?

       1. Simple comparison in a WHERE clause

              WHERE FIELD1 = FIELD2

        2. Join condition

              ON EMP.EMP_NM = MANAGER.MNGR_NM

         3. Sorting condition

               ORDER BY FIELD1

         4. Aggregation condition

                GROUP BY FIELD1

          5. Aggregate functions

                  MAX(FIELD1)

          6. Scalar functions

                 LEAST(FIELD1, FIELD2, FIELD3)

         7. Data clustering conditions

                CLUSTER BY (FIELD1)

     There are several other usages of collate in SQL; however, the ones mentioned above are the most common.
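     As a combined sketch (EMP and NAME are hypothetical, and 'en-ci' requests English, case-insensitive collation), several of these usages look like this:

     SELECT NAME
     FROM EMP
     WHERE COLLATE(NAME, 'en-ci') = 'john smith'   -- comparison ignores case
     ORDER BY COLLATE(NAME, 'en-ci');              -- case-insensitive sort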


E) How to use collate with the LIKE operator

      Here is an example of collate used with the LIKE operator:

     SELECT * FROM EMP WHERE COLLATE(NAME, '') LIKE '%ABC%';
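     Note that locale-based collation specifiers such as 'en-ci' may not be accepted by LIKE. A sketch that achieves a case-insensitive match uses the 'lower' specifier, which compares strings as if lowercased (EMP and NAME are hypothetical):

     -- With a lowercase pattern, the match becomes case-insensitive
     SELECT * FROM EMP WHERE COLLATE(NAME, 'lower') LIKE '%abc%';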


Learn more about Snowflake here.



Wednesday, May 19, 2021

Things to consider while configuring a Snowflake account - part 2

In the previous article, Things to consider while configuring a Snowflake account - part 1, we understood two important factors, i.e., data retention and timezone. In this article, we will look at security, cost savings, and connection performance. Let's start.






A] Security -

        The security requirements for Snowflake have two parts.

a) Storage integration

        We should not create external stages without a storage integration. A storage integration is a secure means of creating connectivity between Snowflake and a cloud storage provider. We need to set the below properties to TRUE so that we can prevent the exposure of access tokens or secret keys to Snowflake users (a sketch follows the list):

  •  REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION 
  •  REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION
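  As a hedged sketch (the integration name, role ARN, and bucket below are placeholders), enforcing these parameters and creating an integration-backed stage might look like this:

  -- Enforce storage integrations for stage creation and operation (account level)
  ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;
  ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = TRUE;

  -- Hypothetical S3 integration; no access keys are exposed to Snowflake users
  CREATE STORAGE INTEGRATION my_s3_int
    TYPE = EXTERNAL_STAGE
    STORAGE_PROVIDER = 'S3'
    ENABLED = TRUE
    STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_role'
    STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/data/');

  -- The external stage references the integration instead of embedding credentials
  CREATE STAGE my_stage
    URL = 's3://my-bucket/data/'
    STORAGE_INTEGRATION = my_s3_int;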

b) Network Policies.

        It is important to prevent access to the Snowflake account from unwanted networks. Network policies help define a list of valid network locations for user connections.

      We can configure account-level and user-level network policies.

         NETWORK_POLICY is the parameter associated with network policies.
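         A minimal sketch (the policy name and CIDR range are placeholders):

         -- Allow connections only from a corporate network range
         CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24');

         -- Apply at the account level ...
         ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;

         -- ... or for a single user
         ALTER USER etl_user SET NETWORK_POLICY = corp_policy;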





B] Connection Performance

      In order to achieve better connection performance, we need to set CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX to TRUE. This limits the metadata returned for JDBC and ODBC connections to the current connection context, which reduces the amount of information transferred.
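      For example, this default can be set once at the account level (it can also be set per user or session):

      ALTER ACCOUNT SET CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX = TRUE;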


C] Cost Savings.

      The cost of a Snowflake implementation depends on two factors: 1) users and 2) warehouses.

1) User: In a Snowflake account, there are system users and human users. Set different configurations for each of these. Below are helpful parameters for users (a sketch follows the list):

  •  LOCK_TIMEOUT
  •  STATEMENT_TIMEOUT_IN_SECONDS
  •  STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
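  A hedged sketch for a hypothetical system user (the user name and values are illustrative):

  -- Fail fast instead of letting jobs hang on locks or queues
  ALTER USER etl_user SET LOCK_TIMEOUT = 3600;
  ALTER USER etl_user SET STATEMENT_TIMEOUT_IN_SECONDS = 7200;
  ALTER USER etl_user SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600;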


2) Warehouse: Optimal use of Snowflake warehouses can be achieved through resource monitoring, setting the appropriate warehouse size, and setting a proper cluster count.

      The parameters mentioned below relate to warehouses (a sketch follows the list):

  • RESOURCE_MONITOR
  • AUTO_SUSPEND
  • WAREHOUSE_SIZE
  • MIN_CLUSTER_COUNT
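  An illustrative sketch (names and values are placeholders; multi-cluster settings assume the Enterprise edition):

  CREATE WAREHOUSE etl_wh
    WAREHOUSE_SIZE = 'XSMALL'
    AUTO_SUSPEND = 60            -- suspend after 60 seconds of inactivity
    AUTO_RESUME = TRUE
    MIN_CLUSTER_COUNT = 1
    MAX_CLUSTER_COUNT = 3;

  -- Cap consumption with a resource monitor (quota is in credits)
  CREATE RESOURCE MONITOR monthly_quota WITH CREDIT_QUOTA = 100;
  ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = monthly_quota;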


 Learn more about Snowflake here.



Informatica MDM match job and Tokenization job

       Are you interested in knowing the relationship between the Informatica MDM match job and tokenization job? Would you also like to know what conditions are needed in order to run the tokenization job as part of the match job? If so, then you have reached the right place. In this article on Informatica MDM, we will learn about the relationship between the match and tokenization jobs.

A] What is the tokenization job in Informatica MDM?

           Tokenization is a process in Informatica MDM during which match columns are used to generate tokens. These tokens are based on the type of fuzzy match key column, the other match columns, and the type of data present in the system.

          An example of a token is X01Z530K. It is an alphanumeric value generated for a given record.





B] What is a match job in Informatica MDM?

      The match process matches records in order to consolidate them across the various source systems. It uses the tokens that are generated as part of the tokenization job.

        Once records are matched, the match pairs are stored in the <BO Table>_MTCH table. The merge or consolidation process uses the MTCH table to consolidate the records.
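        As a hedged sketch (the hub typically names this table C_<BO>_MTCH; the PARTY base object and column names here are assumptions), the match pairs can be inspected with a simple query:

        -- Hypothetical: review matched record pairs for a PARTY base object
        SELECT ROWID_OBJECT, ROWID_OBJECT_MATCHED
        FROM C_PARTY_MTCH;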

C] Does the match job always generate match tokens?

        The answer is no. The match job does not always generate match tokens automatically; it depends on various factors within the match job.





D] What factors are needed in order to generate tokens as part of the match job automatically?

         The following conditions are required in order to generate match tokens when we execute the match job:

     1. The base object should be fuzzy-match enabled.

     2. The 'Fuzzy Match Key' column must be defined in the match columns.

     3. The match ruleset should have at least one fuzzy match rule.

     4. If the Generate Match Tokens job has already been executed, the match job will not generate tokens.

     5. If the value of the Dirty Indicator column is 0 (zero) for all records, the match job will not generate tokens.

More details about the tokenization job are explained in this video.







Sunday, May 9, 2021

Understanding the BPEL Process in Informatica MDM

Are you looking to learn what the BPEL process is? Would you also like to know what types of files are associated with a BPEL process? Would you be interested in knowing the various activities available in a BPEL process? If so, then you have reached the right place. In this article, we will explore the BPEL process in detail.




1. What is a BPEL Process?

        Before understanding the BPEL process, we need to understand SOA. Service-Oriented Architecture, aka SOA, is an architectural approach using which we can build business processes as reusable services. In order to build programs in SOA architecture, BPEL, i.e., Business Process Execution Language, is used. In short, BPEL is a programming language.

2. Types of files in a BPEL Process

       Here is a list of the file types used in BPEL processing:

     a) bpel - This type of file contains the set of activities added to the process.

     b) wsdl - This type of file describes the services and references for the BPEL process service component.

     c) xsd - This type of file provides definitions for the types of fields used in the service components.

     d) xml - This type of file gives details based on XML programming standards.





3. Understanding BPEL Activities

     A BPEL program contains a set of activities. In this section, we will understand the various types of activities involved in the BPEL process.



    a) Service Task - The service task is used to call a web service in the BPEL process.

    b) User Task - A task that needs to be performed by a user can be configured as a user task.

    c) Script Task - The script task is used to perform script programming, e.g., checking the value of a parameter.

   d) Business Rule Task - The business rule task is used to establish a business rule in BPEL.

   e) Suspend Activity - The suspend activity is used to suspend the flow.

   f) Validation Activity - Validations required in the BPEL process can be achieved with the validation activity.

   g) Abstract Activity - An interface can be designed and defined using the abstract activity.

   h) Manual Activity - Manual actions such as approving or rejecting can be achieved with the manual activity.

   i) Send Task - The send task is used to send an event from one flow to another.

   j) Receive Task - The receive task is used to start the flow or to receive an event from another process into the current process.


Learn more about Informatica ActiveVOS here.




Sunday, May 2, 2021

How to convert a DOS-type file to a Unix-type file

Are you looking for code to convert a DOS-type file to a Unix-type file programmatically? Are you also interested in knowing the various things we need to consider while converting? If so, then you have reached the right place. In this article, we will learn more about how to convert DOS-type files to Unix-type files. So let's start with dos2unix.






Introduction




When we create a file on a Windows system, i.e., in DOS format, copy it to a Unix server, and then try to use or execute it, we may come across various issues. One common issue is that the file cannot be processed because each line ends with an extra carriage-return character. In order to fix this, we need to convert the DOS-format file to a Unix-format file. In the next section, we will see the program to do it.





About the Program

Along with converting the special characters (carriage returns) that appear when a DOS file is copied to a Unix environment, this program also handles various validation conditions, such as whether an input file is provided and any failures at the time of conversion.


Program to convert DOS to Unix

#!/bin/bash
# Description: This script converts a DOS-type file to a Unix-type file.
# It is mainly used to handle line-ending problems (CRLF -> LF).
#
# Usage: d2u.sh FILE ...            convert each FILE in place
#        d2u.sh -n INFILE OUTFILE   convert INFILE into OUTFILE

# Check whether a file is provided in the input
if [ -z "$1" ]
then
 echo "ERR_0001 : The input file is not passed to process"
 exit 1
fi
if [ "$1" = "-n" ]
then
 if [ -z "$2" ]
 then
  echo "ERR_0002: Invalid - Parameter 2 can not be empty - [ $0 $@ ]"
  exit 1
 fi
 if [ -z "$3" ]
 then
  echo "ERR_0003: Invalid - Parameter 3 can not be empty - [ $0 $@ ]"
  exit 1
 fi
 
 # Parameters 2 and 3 can be the same; in that case convert the file in place.
 if [ "$2" = "$3" ]
 then
  tmpfile=/tmp/myd2u1.$$
  # Strip the trailing carriage return from every line (CRLF -> LF)
  sed 's/\r$//' <"$2" >$tmpfile 2>/dev/null
  if [ $? -ne 0 ]
  then
   echo "ERR_0004: Failed - Process failed to convert from dos to unix type - sed. Exiting..."
   rm -f $tmpfile
   exit 1
  fi
  cp $tmpfile "$2"
  if [ $? -ne 0 ]
  then
   echo "ERR_0005: Failed - Process failed to convert from dos to unix type - cp. Exiting..."
   rm -f $tmpfile
   exit 1
  fi
  rm -f $tmpfile
  exit 0
 fi
 
 # Input parameters are good, process file 2 into file 3
 echo "INFO_0001: Converting file $2 into file $3"
 sed 's/\r$//' <"$2" >"$3" 2>/dev/null
 if [ $? -ne 0 ]
 then
  echo "ERR_0006: Failed - Process failed to convert from dos to unix type - sed. Exiting..."
  exit 1
 fi
 exit 0
fi

# Input parameters are good - process all files now, each in place
tmpfile=/tmp/myd2u2.$$
for ftop in "$@"
do
 echo "INFO_0002: processing file $ftop started"
 # Strip the trailing carriage return from every line (CRLF -> LF)
 sed 's/\r$//' <"$ftop" >$tmpfile 2>/dev/null
 if [ $? -ne 0 ]
 then
  echo "ERR_0006: Failed - Process failed to convert - sed. Exiting..."
  rm -f $tmpfile
  exit 1
 fi
 cp $tmpfile "$ftop"
 if [ $? -ne 0 ]
 then
  echo "ERR_0007: Failed - Process failed to convert - cp. Exiting..."
  rm -f $tmpfile
  exit 1
 fi
 rm -f $tmpfile
 echo "INFO_0003: processing file $ftop completed"
done




