DronaBlog

Friday, October 22, 2021

What are differences between multimerge and merge API in Informatica MDM

                Are you interested in knowing the use of the multimerge and merge APIs? Would you also like to know the difference between the merge and multimerge APIs? If so, then you have reached the right place. In this article, we will learn about these APIs in detail.


A) What is Multimerge API? 

                 The Multimerge API is used to merge a list of records together. Multimerge is the generic form of the Merge API.






B) What is Merge API? 

                The Merge API is used to merge two base object records that have been identified as representing the same base object record.


C) What are the differences between Multimerge and Merge API? 

          1) Number of records to merge : 

              a) The Merge API allows only two records to be merged at a time. 

              b) The Multimerge API allows more than two records to be merged.

         2) Parameters in the request : 

             a) The Merge API accepts a sourceRecordKey and a targetRecordKey as parameters in the request.

             b) The Multimerge API accepts a list of record keys as a parameter in the request.





         3) Consolidated records : 

             a) The Merge API allows records to be merged irrespective of the value of the consolidation indicator. 

             b) The Multimerge API allows merging of unconsolidated records only, i.e. consolidation indicator = 1.

         4) Final value of the consolidation indicator : 

            a) The final value of the consolidation indicator after a Merge API operation is 1, i.e. the consolidated state.  

            b) The Multimerge API does not change the consolidation indicator value of the surviving record.

        5) Surviving record : 

             a) In the Merge API, the surviving record is specified by the targetRecordKey parameter.

             b) For the Multimerge API, the surviving record is determined by the survivorship rules of the XREF records participating in the merge process.


                 Learn more about Informatica MDM survivorship rules here 



   

Saturday, October 16, 2021

What is Time Travel in Snowflake ?

                        Are you looking for details about Time Travel in Snowflake? Are you also interested in knowing what tasks we can perform using the Time Travel feature? If so, then you have reached the right place. In this article, we will learn about one of the powerful features of Snowflake.


A) What is Time Travel in Snowflake

                        The feature by which we can access historical data at any point within a specified period is called Time Travel. In Snowflake, Time Travel gives access not only to data that has been changed but also to data that has been deleted.


B) What are the tasks that can be performed using Time travel in Snowflake?

                      The tasks below can be performed effectively by using the Time Travel feature:

             1.  Backing up the data from key points in the past.

             2. Duplicating the data from key points in the past.

            3. Restoring tables, schemas, and databases if they are accidentally deleted.






C) What is Data Protection Lifecycle? 

                  In Snowflake, the data protection lifecycle has three phases. 

           1. Current Data Storage: on the current data set we can perform standard operations such as DML and DDL.

           2. Time Travel Retention: The retention period is normally 1 to 90 days. Here is the list of operations allowed with Time Travel (a short sketch follows at the end of this list):

          a) SELECT ... AT | BEFORE ...

          b) CLONE ... AT | BEFORE ...

          c) UNDROP ...

          3. Fail-safe: This is the last phase of the data protection lifecycle. Recovery in this phase can be performed only by Snowflake; no user operations are allowed.
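               To make the Time Travel operations above concrete, here is a minimal Java sketch that runs a historical query through the Snowflake JDBC driver (the driver must be on the classpath). The account URL, credentials, warehouse, and the CUSTOMER table are placeholders to replace with your own; the AT(OFFSET => ...) clause is the Time Travel syntax listed above.

          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.ResultSet;
          import java.sql.Statement;
          import java.util.Properties;

          public class TimeTravelDemo {
              public static void main(String[] args) throws Exception {
                  Properties props = new Properties();
                  props.put("user", "<user>");              // placeholder credentials
                  props.put("password", "<password>");
                  props.put("warehouse", "<warehouse>");
                  props.put("db", "<database>");
                  props.put("schema", "<schema>");

                  // Placeholder account URL - replace with your Snowflake account.
                  String url = "jdbc:snowflake://<account>.snowflakecomputing.com/";

                  try (Connection con = DriverManager.getConnection(url, props);
                       Statement stmt = con.createStatement();
                       // Query the CUSTOMER table as it existed one hour ago.
                       ResultSet rs = stmt.executeQuery(
                               "SELECT * FROM CUSTOMER AT(OFFSET => -60*60)")) {
                      while (rs.next()) {
                          System.out.println(rs.getString(1));
                      }
                  }
              }
          }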


D) Data  Retention Period in snowflake 

                In Snowflake, the data retention period is a key component of Time Travel. The data retention period specifies the number of days for which data is preserved. Snowflake preserves the state of the data before an update, delete, or drop. 





               For Snowflake Standard Edition, the data retention period is one day.

              For Snowflake Enterprise Edition, the data retention period can be set between 0 and 90 days.


            Learn more about Snowflake here -



           

Tuesday, October 12, 2021

What are new features in Java 17 - Part 2

                Are you interested in knowing the new features introduced in Java 17? Are you also interested in knowing which features are deprecated in Java 17? If so, then you have reached the right place. This is the second part of the Java 17 features series. You can access the first part of the Java 17 features here.

A) Introduction 

              In the previous article, we explored Java 17 features such as JEP 412: Foreign Function & Memory API, JEP 411: Deprecate the Security Manager, JEP 414: Vector API, and JEP 415: Deserialization Filters.

             In this article, we will focus on the features below in Java 17 

           1. JEP 409: Sealed classes 

           2. JEP 406: Pattern Matching for switch 

           3. JEP 403: Strongly Encapsulate JDK internals 

           4. JEP 398: Deprecate Applet API for removal 





B ) JEP 409: Sealed classes 

             A sealed class restricts which other classes may extend it. The same applies to interfaces: a sealed interface restricts which other interfaces or classes may extend or implement it. 

            With Java 17, the new character sequences sealed, non-sealed, and permits are introduced and allowed as contextual keywords.
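            A minimal sketch of a sealed hierarchy (the type names Shape, Circle, and Square are invented purely for illustration):

          // Shape can only be extended/implemented by the types listed in its permits clause.
          sealed interface Shape permits Circle, Square {}

          // A permitted subtype must itself be final, sealed, or non-sealed.
          final class Circle implements Shape {}

          // non-sealed re-opens the hierarchy: any class may extend Square.
          non-sealed class Square implements Shape {}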






C ) JEP 406: Pattern Matching for switch 

              With this change, all existing switch expressions and statements compile with identical semantics and run without any modification.

               Two new kinds of patterns are introduced (see the example after this list):

          1. Guarded pattern: used to refine the pattern matching logic with a boolean expression. 

          2. Parenthesized pattern: used to resolve parsing ambiguities. 
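           Here is a small illustrative example that combines a type pattern, a guarded pattern, and a parenthesized pattern. In Java 17 this is a preview feature, so it must be compiled and run with --enable-preview.

          public class PatternMatchingDemo {
              static String describe(Object obj) {
                  return switch (obj) {
                      // Guarded pattern: a type pattern refined by a boolean expression.
                      case Integer i && i > 0 -> "positive integer " + i;
                      // Parenthesized pattern: parentheses group a pattern to avoid parsing ambiguity.
                      case (Integer i && i < 0) -> "negative integer " + i;
                      case String s -> "string of length " + s.length();
                      default -> "something else";
                  };
              }

              public static void main(String[] args) {
                  System.out.println(describe(42));      // positive integer 42
                  System.out.println(describe("hello")); // string of length 5
              }
          }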


D ) JEP 403: Strongly Encapsulate JDK internals 

             All internal elements of the JDK are strongly encapsulated. The only exception here is sun.misc.Unsafe.


E ) JEP 398: Deprecate Applet API for Removal 

              As we know, the Applet API has been deprecated since Java 9, but it was never removed. With Java 17, it is marked for removal. There will not be much impact, because the Applet API is hardly used anymore now that more advanced web technologies have replaced it.


                    Learn more about Java here -

            


Sunday, October 10, 2021

What are new Features in Java 17 - Part 1

               Are you looking for detailed information about all the interesting features introduced in JDK 17? Would you also like to understand terminology such as LTS or JEP? If so, then you have reached the right place. In this article, we will explore the new features in the JDK 17 release.

A) What is LTS in Java?

               LTS is an abbreviation for Long-Term Support. It is a product lifecycle management policy under which a designated software release is supported for longer than a standard release.






B) What is JEP in Java? 

               JEP is an abbreviation for JDK Enhancement Proposal. Oracle Corporation drafted this process to collect proposals for enhancements to the Java Development Kit, i.e. the JDK. 


C)  What are the new features in Java 17? 

                Java 17 is one of the major releases and comes with various interesting features. In this article we will explore the features below :

          1. JEP 411: Deprecate the security manager 

         2. JEP 412: Foreign Function & Memory API 

         3. JEP 414: Vector API

        4. JEP 415: Deserialization Filters


1 . JEP 411: Deprecate the security manager 

                  The Security Manager API, which was used to define a security policy for an application, is deprecated for removal in the JDK 17 release. The Security Manager is deprecated because the API is not commonly used. One of its basic features is blocking System::exit. If an application continues to use the Security Manager, a warning message is issued.
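                  As a small illustration (assuming JDK 17), installing a security manager now requires an explicit opt-in on the command line and triggers the warning mentioned above:

          public class SecurityManagerDemo {
              public static void main(String[] args) {
                  // On JDK 17 this call succeeds only when the JVM is started with
                  // -Djava.security.manager=allow; otherwise it throws UnsupportedOperationException.
                  // Either way, the JDK warns that the API is terminally deprecated (JEP 411).
                  System.setSecurityManager(new SecurityManager());
                  System.out.println("Installed: " + System.getSecurityManager());
              }
          }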





2. JEP 412: Memory API and Foreign function

                   With JEP 412, new APIs are introduced: the Foreign Memory Access API and the Foreign Linker API. With these APIs we can invoke code outside of the JVM and also safely access foreign memory. Here, foreign memory means memory that is not managed by the JVM. 

3. JEP 414: Vector API 

                     The Vector API, introduced as an incubator module in JDK 16, is enhanced in JDK 17 to express vector computations that compile at runtime to optimal vector instructions on supported CPU architectures. It provides reliable compilation and performance on AArch64 and x64 architectures.
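                     A short sketch of an element-wise array addition expressed with the Vector API; it assumes the incubating module is added at compile and run time with --add-modules jdk.incubator.vector.

          import jdk.incubator.vector.FloatVector;
          import jdk.incubator.vector.VectorSpecies;

          public class VectorAdd {
              // Species chosen at runtime for the best vector shape on the current CPU.
              static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

              static void add(float[] a, float[] b, float[] c) {
                  int i = 0;
                  int upperBound = SPECIES.loopBound(a.length);
                  for (; i < upperBound; i += SPECIES.length()) {
                      FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                      FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                      va.add(vb).intoArray(c, i);   // one vectorized lane-wise addition
                  }
                  for (; i < a.length; i++) {
                      c[i] = a[i] + b[i];           // scalar tail for the remaining elements
                  }
              }
          }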

4. JEP 415: Deserialization Filters 

                     With a JVM-wide filter factory, applications can configure context-specific and dynamically selected deserialization filters. This is helpful for preventing deserialization attacks.
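                     A minimal sketch of a JVM-wide filter factory using java.io.ObjectInputFilter; the allow-list pattern below (com.example.dto.*) is an invented example to adapt to your own classes.

          import java.io.ObjectInputFilter;

          public class FilterFactorySetup {
              public static void main(String[] args) {
                  // Allow only DTO classes up to object-graph depth 5; reject everything else.
                  ObjectInputFilter allowList =
                          ObjectInputFilter.Config.createFilter("com.example.dto.*;maxdepth=5;!*");

                  // JEP 415: the factory is consulted for every ObjectInputStream,
                  // so a context-specific filter can be selected dynamically.
                  ObjectInputFilter.Config.setSerialFilterFactory((current, requested) -> allowList);
              }
          }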


                      Learn more about Java here -



Tuesday, October 5, 2021

How to monitor Errors in the Alert logs in Oracle Database?

                Are you looking for details about monitoring errors in the alert log? Would you also like to know about ORA errors such as ORA-7445, ORA-1653, and ORA-1650? If so, then you have reached the right place. In this article, we will understand how to monitor errors in the alert logs.


A) What are Alert logs?

                The important information about error messages and exceptions that occur during various database operations is captured in a log file called the alert log.

                 Each Oracle database instance has one alert log.






B) What is the location of Alert logs? 

                 The location is set by the DIAGNOSTIC_DEST initialization parameter. The alert log file is created at this path. Normally, the alert log file name is alert_SID.log.


C) Database crash errors 

                  These errors are associated with conditions severe enough to crash an Oracle instance. To analyze an Oracle instance crash, we need to capture a trace file or a core dump file and send it to Oracle technical support.


D) ORA - 600 Errors 

                  An ORA-600 will not crash the Oracle database. However, it may produce a core dump or trace file - 

           Example of trace file -

                   Errors in file /ora/home/dba/oracle/product/rdbms/log/ora_123.trc

                   ORA-00600: internal error code, arguments: [12700], [12345], [61], [], []...






E) ORA-1578

                 If a data block that appears to be corrupt is read, ORA-1578 is returned. The error message provides details of the file number and block number.

                  e.g. 

                ORA-01578: ORACLE data block corrupted (file # xyz, block # 01)


F) ORA-1650 

                It is an error message related to the rollback segment. The error message 'ORA-1650: unable to extend rollback segment' is produced when the rollback segment has become full. The Oracle instance will not crash, but the task will be terminated.

                e.g.  ORA-1650: unable to extend rollback segment PQR by 64000 in tablespace ROLLBACK

             Based on the above critical error messages, we can build a monitoring system. 
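             As a starting point for such a monitoring system, here is a small, self-contained Java sketch that scans an alert log for the critical ORA- errors discussed above. The log path and the error list are assumptions to adapt to your environment.

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.util.List;
          import java.util.stream.Stream;

          public class AlertLogScanner {
              // Errors discussed in this article; extend the list for your environment.
              private static final List<String> CRITICAL = List.of(
                      "ORA-00600", "ORA-7445", "ORA-1578", "ORA-1653", "ORA-1650", "ORA-00245");

              public static void main(String[] args) throws IOException {
                  // Assumed location - derive the real path from DIAGNOSTIC_DEST for your instance.
                  Path alertLog = Path.of("/u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log");

                  try (Stream<String> lines = Files.lines(alertLog)) {
                      lines.filter(line -> CRITICAL.stream().anyMatch(line::contains))
                           .forEach(line -> System.out.println("ALERT: " + line));
                  }
              }
          }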



Learn more about oracle here -







Sunday, October 3, 2021

How to handle NULL values in Snowflake

                   Are you facing challenges while handling NULL values in Snowflake? Are you also interested in knowing what we need to consider when columns in a query contain NULL values? If so, then you have reached the right place. In this article, we will learn the details of handling NULL values.


A) What are NULL values?

                   Many times a NULL value is referred to as 'no value'. Some systems do not have the concept of NULL at all. In technical terms, a NULL value is a reference to an empty area of memory. Some systems handle it differently than others, and Snowflake has its own way of handling NULL values.






B) What are the rules for handling NULL values in Snowflake?

                  Here is the list of rules  -

             1. Not-null to null value comparison :

                   If we compare a non-NULL value with a NULL value, the result is NULL.

                   e.g. 'ABC' = NULL returns NULL

             2. Equality or inequality comparison :

                   If we use inequality comparisons such as less than or greater than with NULL, the result is a NULL value.

                   e.g. 'ABC' > NULL returns NULL

             3. Comparison of NULL values :

                   If we compare one NULL value with another NULL value, the result is a NULL value.

                   e.g. NULL = NULL returns NULL

             4. The best way to compare NULL values is to use 

                      IS NULL or IS NOT NULL

             5. NULL-safe equality function 

                  We can use the EQUAL_NULL function for NULL-safe equality checks.

                  e.g. EQUAL_NULL('ABC', NULL) returns FALSE.





             6. Aggregate functions ignore NULL values.

                  Assume that one of the fields in Snowflake contains the values below -

                  1, 2, NULL, 3, NULL, 6

                 If we use the aggregate function AVG on this field, it will return 3, i.e. (1+2+3+6)/4.

                 i.e. it ignores NULL values.

             7. Real average value 

                  If we need the real average in the above example, we can divide the SUM by the total row count, which effectively treats each NULL value as 0.

                  i.e. SUM(<value>)/COUNT(*)

                 e.g. (1+2+0+3+0+6)/6

                  i.e. 12/6 = 2

             8. The empty string in Snowflake is NOT NULL.

                 e.g. '' IS NULL returns FALSE.

             9. COUNT(*) returns the total count of rows in the table. 

            10. COUNT(<column_name>) returns the count of rows with non-NULL values in that column only.


                                  Learn more about snowflake here -



Tuesday, September 28, 2021

What are the components of snowflake architecture ?

                Are you looking for an article on snowflake architecture? Are you also looking for the components of snowflake architecture? If so, then you reached the right place. In this article, we will explore database storage, Query processing, and cloud services in detail.

A) What is the Architecture of Snowflake?

               Snowflake's architecture is a hybrid of shared-disk and shared-nothing database architectures.

       1. Snowflake uses a central data repository which is similar to shared-disk architecture.

        2. Snowflake processes queries using massively parallel processing (MPP) compute clusters. In this kind of architecture, each node in the cluster stores a portion of the entire data set. This is similar to a shared-nothing architecture.





B) What are the components of Snowflake Architecture?

              The components of snowflake architecture are as below 

        1. Database Storage

        2. Query Processing 

        3. Cloud Services 

                 Let's understand each of these components one by one

1. Cloud  Services 

               It is the topmost layer in the Snowflake architecture. It consists of a collection of services that coordinate various activities across the Snowflake platform. The cloud services layer ties the various components of Snowflake together in order to fulfill requests, from handling a login to returning a response to the user.

               Here is the list of services that are handled in this layer.

         1. Authentication 

         2. Infrastructure Management 

         3. Metadata Management 

         4. Query Parsing 

         5. Query optimization 

         6. Access Control

2. Query Processing 

               In this layer, query execution is handled. It is the most commonly and widely used component of Snowflake. Queries are processed using virtual warehouses. Each virtual warehouse is a massively parallel processing (MPP) compute cluster consisting of multiple compute nodes provisioned by Snowflake from the cloud provider.





3. Database Storage 

              It is cloud storage where optimized data is stored. What is optimized data? Optimized data is simply data that has been reorganized by Snowflake into a compressed, columnar format.

             What aspects of the data does Snowflake handle? Here is the list that Snowflake takes care of:

         1. File Size 

         2. Structure of the data 

         3. Compression of the data 

         4. Metadata

         5. Statistics of the data

         6. Organization of the data


 The important thing here is that the data stored by Snowflake is not visible or directly accessible to customers. It can only be accessed through SQL query operations.


                 Learn more about snowflake here 



Tuesday, September 21, 2021

What are applications of Artificial Intelligence?

           Are you looking for an article that lists currently available applications which use Artificial Intelligence? If so, then you reached the right place. In this article, we will explore the applications which leverage the benefits of Artificial Intelligence.


1) Google

          Google has a predictive search engine that predicts the next word when a user types a keyword to search on the Google page. This recommendation suggested by Google search is one of the best examples of Artificial Intelligence aka AI. It uses predictive analysis to achieve it.






2) JP Morgan Chase's Contract Intelligence platform

          Artificial intelligence, machine learning, and image recognition are used in JP Morgan Chase's Contract Intelligence platform to analyze legal documents. This system is very efficient compared to the manual review of each and every legal document.


3) IBM Watson 

              It is another Implementation of AI. IBM Watson technology is used by Healthcare organizations for medical diagnosis.


4) Google Eye Doctor

              The condition called diabetic retinopathy, which can cause blindness, can be diagnosed by using this AI-based technology named Google Eye Doctor.


5) Facebook 

              It is one of the social media platforms which uses artificial Intelligence for face verification. Internally it uses machine learning and deep learning to detect facial features and tag friends.






6) Twitter

              Twitter uses Artificial Intelligence to detect hate speech and terroristic language in tweets.


7) Siri or Alexa

               These virtual assistant devices use Artificial Intelligence for speech recognition.


8)  Tesla 

               Nowadays we hear the buzzwords autonomous driving and self-driving cars. Tesla is the leader in this space. Tesla uses computer vision, image recognition, and deep learning to build smart cars that detect obstacles and drive around them without human interaction.


              Learn more about Artificial Intelligence here




Monday, September 6, 2021

Types of Artificial intelligence

         Are you looking for an article on the types of artificial intelligence? Are you also interested in knowing what the stages of artificial intelligence are? If so, then you have reached the right place. In this article, we will focus on the various types of artificial intelligence.


A) What is Artificial Intelligence or AI?

          A system that is capable of performing tasks that normally require human intelligence, e.g. decision making, object detection, complex problem solving, etc., is called an Artificial Intelligence system, and the capability with which it performs them is called Artificial Intelligence.






B) Stages of Artificial Intelligence

           The stages of artificial intelligence are 

       1. Artificial Narrow Intelligence (Weak AI) 

             Artificial Narrow Intelligence is also called weak AI. It is a stage of AI that involves machines that can perform only specific tasks.

             e.g. Alexa, or Siri on the iPhone

       2. Artificial General Intelligence

               Artificial General Intelligence is also known as strong AI. In this stage, the machine will possess the ability to think and make decisions.

                There is no implementation of strong AI yet.

        3. Artificial super Intelligence

               It is a stage of AI when the capability of computers will surpass human beings.

               This is still considered a hypothetical situation.






C) Types of Artificial Intelligence

            The types of Artificial Intelligence are as below-

          1. Reactive machine AI :

                   In this type, the machine operates solely based on the current situation and current data.

                    e.g. IBM's chess machine, Deep Blue

            2. Limited memory AI 

                     In this type, the machine uses past data and its memory to make informed and improved decisions.

                      e.g. a self-driving car such as a Tesla

               3. Theory of mind AI 

                        In this type of artificial intelligence, human beliefs and thoughts can be comprehended by considering emotional intelligence.

                 4. Self-aware AI 

                          In this, the machines will have their own consciousness and become self-aware. This type of AI does not exist yet.


           Learn more about Artificial Intelligence and data science here




Sunday, September 5, 2021

How to design landing table in the Informatica MDM ?

          Are you planning to implement Informatica MDM in your project and starting to design a landing table? Are you also interested in knowing the types of landing table designs? If so, then you have reached the right place. In this article, we will explore the factors that need to be considered while designing a landing table in Informatica MDM.


A) What is the landing table in Informatica MDM?

             Landing tables are the tables into which data from the source systems is loaded so that it can be processed and sent through the stage process to be cleansed and standardized. For the stage process, the landing tables act as the source and the stage tables as the target.






B) Factors to be considered for landing table design

               We need to consider the following factors while designing landing tables in MDM:

             1. How many source systems are involved

             2. What is the volume from each source system

             3. Impact of development timelines

             4. Maintenance requirements

             5. Partition requirements


C) Type of landing table designs

             Based on the information captured in the previous section, we can design landing tables in two ways:

             1. One landing table for each source

             2. One landing table for multiple sources





1. One landing table for each source

             If each source has a different type of data (e.g. one source is customer-centric and another source is account-centric), if the volume in each source is roughly equal, or if we have good development and maintenance bandwidth, then we can design one landing table for each source.









2. One landing table for multiple sources

             If multiple sources have similar attributes and data types, the volume in each source system is low, and we need to expedite the development time, then we can design one landing table for multiple source systems. 



                                 



Learn more about MDM landing table here







Sunday, August 29, 2021

What is difference between TRUNCATE and DELETE

         Are you working on databases or learning database concepts, and do you want to know about basic table commands like DELETE and TRUNCATE? Then this is the right place. In this article, we are going to learn what these commands mean and the purpose of using them. TRUNCATE and DELETE do a similar job, but in slightly different ways. We will see what the difference between the DELETE and TRUNCATE commands is.






TRUNCATE command

- TRUNCATE is a DDL (Data Definition Language) command.

- If you use the TRUNCATE command, it deletes the data in a table but not the table itself.

- TRUNCATE locks the table but does not lock the rows, as it removes all the data from the table.

- It removes all the rows of the table, but the structure of the table remains as it is. It does not delete the table structure, columns, indexes, or constraints.

- This command does not require a WHERE clause.

- After a TRUNCATE operation, rollback is not possible because this command does not maintain a log from which the data can be rolled back.

                                  Syntax –

                                TRUNCATE TABLE table_name;

                                 Example –

                                 TRUNCATE TABLE Customer;

DELETE command

- DELETE is a DML (Data Manipulation Language) command.

- The DELETE command deletes records from a table; without a WHERE clause it deletes all the records.

- DELETE locks the rows; each row being deleted is locked.

- If we want to delete a specific row/record from a table, we can use the WHERE clause.

- The table structure, indexes, and attributes are not deleted by the DELETE operation.

- Rollback is possible after a DELETE operation, but only before the COMMIT statement; after COMMIT, the data can no longer be rolled back (see the JDBC sketch after the examples below).

 

                                     Syntax –

                         DELETE FROM table_name;

                                     Example -

                         DELETE FROM Customer;

 

                                    Syntax (Using WHERE clause)-

                        DELETE FROM table_name WHERE condition;

                                    Example-

                                    DELETE FROM Customer WHERE id=1;

 The DELETE FROM Customer statement above deletes all the records from the ‘Customer’ table without deleting the ‘Customer’ table itself, while the statement with the WHERE clause deletes only the matching record.
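To illustrate the rollback difference mentioned above, here is a small JDBC sketch. It assumes an Oracle JDBC driver on the classpath and uses a placeholder connection URL, credentials, and the Customer table from the examples; replace them with your own.

          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.Statement;

          public class DeleteRollbackDemo {
              public static void main(String[] args) throws Exception {
                  // Placeholder connection details - replace with your database URL and credentials.
                  try (Connection con = DriverManager.getConnection(
                          "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "scott", "tiger")) {
                      con.setAutoCommit(false);                 // start an explicit transaction
                      try (Statement stmt = con.createStatement()) {
                          int deleted = stmt.executeUpdate("DELETE FROM Customer WHERE id = 1");
                          System.out.println("Rows deleted: " + deleted);
                          con.rollback();                       // DELETE can be rolled back before COMMIT
                          // con.commit();                      // after COMMIT the deletion is permanent
                      }
                  }
              }
          }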





Difference between the TRUNCATE and DELETE commands

- The DELETE command deletes specific records based on conditions defined by the WHERE clause, but it does not free up the space.

- The TRUNCATE command does not require a WHERE clause. After executing a TRUNCATE statement, all the memory is released along with the removal of the data.

- The DELETE command maintains a log but the TRUNCATE command does not, hence the TRUNCATE command is faster than the DELETE command.

- The TRUNCATE command uses less transaction log space.





 

 

Tuesday, August 24, 2021

What is data science and use of it ?

           Are you looking for details about Data Science? Are you also interested in knowing the uses of Data Science in the real world? If so, then you have reached the right place. In this article, we will explore Data Science, and we will also learn about predictive analytics and prescriptive analytics.






A) What is Data Science? 

              Data science is a blend of computer science, business domain knowledge, and mathematics & statistics. It also includes machine learning, data analytics, and advanced analytics. Data science is used to discover various patterns present in the data.


B) What is the difference between data statistics and data science?

                 Data statistics or data analytics includes business administration and exploratory data analysis. On the other hand, data science includes data product engineering, machine learning, and advanced algorithms along with exploratory data analysis. In short, data statistics explains what is going on by using data history. Data science, however, explains what is going on and also identifies future events by using machine learning.

                Data science uses predictive causal analytics, prescriptive analytics, and machine learning.

C) What is predictive causal analytics?

                Predictive causal analytics is used to build a model that can predict the possibility of a particular event in the future. For example, if you are a banker giving a loan to a customer, you would like to know the probability of the customer making the loan payments on time. Here, we can develop a model that performs predictive analytics on the customer's payment history over a period of time.


D) What is prescriptive analytics?

                Prescriptive analytics is used to build a model that has the intelligence to take decisions and the ability to modify itself based on dynamic parameters. For example, Tesla's self-driving car collects driving history data from thousands of miles across different scenarios such as signal lights, turns, etc. By applying this intelligence, the car is able to take decisions such as when to turn.






E) Machine learning

                  Machine learning is a method of analyzing data that is used for making predictions. Machine learning is also used for pattern discovery.

                 Various tools use machine learning to enhance capabilities such as   


      

Saturday, August 21, 2021

What are the log Files in Informatica MDM ?

        Are you trying to analyze the issue in Informatica MDM? Are you looking for the details of the log files which are generated during various Processes in the Informatica MDM? This article will explore more about the log files, their locations, and when to use those.





A) Introduction

         Informatica MDM has various components such as application server, database, business process management tool, Application user interface such as Entity 360 or customer 360. Each of these components generates logs throughout its processing.

         Here we will understand various types of log files and these are

         1. Hub server logs

         2. Process or cleanse server logs

         3. E360 logs

         4. Provisioning logs

         5. Post Installation logs

         6. Elastic search logs

         7. Application server logs

         8. Database logs

1. Hub server logs

         Informatica MDM has two core components: the hub server and the process server (earlier called the cleanse server). The hub server is used to initiate jobs and to manage and control threads; in short, the hub server is the master component in Informatica MDM. Logs are generated when we access the Administration section of the MDM hub, especially when we validate the ORS. These logs are captured in the hub server log.

          Location :- <Informatica MDM install folder>/hub/server/logs

        e.g. /abc/hub/server/logs/cmxserver.log

2. Process server logs

           When we execute the jobs such as Stage, load, tokenization, match and merge jobs, the logs are captured in the process server logs.

            Location :- <Informatica MDM install folder>/hub/cleanse/logs

             e.g. /abc/hub/cleanse/logs/cmxcleanse.log

3. E360 logs

              We can configure the user Interface using the provisioning tool. The User Interface is called Entity 360 application. When we access the application the logs are generated.

              Location:- <Informatica MDM install folder>/hub/server/logs 

              e.g. /abc/hub/server/logs/entity360view.log

4. Provisioning logs

             We use the Provisioning tool to configure business entities, transformations, views, tasks, and E360 applications. When we use the Provisioning tool, the logs are generated and stored at the location below.

             Location :- <Informatica MDM install folder>/hub/server/logs 

             e.g. /abc/hub/server/logs/provisioning.log





5. Post Install logs

               The post-install logs are generated when we install Informatica MDM as well as when we apply EBF or upgrade.

               Location :- <Informatica MDM install folder>/hub/server/logs

              e.g. /abc/hub/server/logs/postInstallSetup.log

6. Elastic search logs

                If you are using Elastic search in your Informatica MDM then you may need to use Elastic search logs.

                Location:- <Elasticsearch folder>/logs 

                e.g. /aqr/logs/elasticsearch.log

7. Application server logs

                Application server logs are located as below

                a) JBoss

                 <JBoss home>/standalone/log/server.log

                  b) WebLogic

                   <WebLogic home>/domains/<domain name>/servers/<server name>/log/abc.out

                  c) WebSphere

                   <WebSphere home>/AppServer/profiles/<profile name>/logs/<server name>/SystemOut.log

8. Database logs

                  Database logs are not directly accessible. You need to reach out to your DBA to get database logs.


           Learn more about Informatica MDM here







        



Friday, August 20, 2021

What is the difference between 'Remove from match list' and 'Not a match' in IDD ?

             Are you looking for an article on Informatica Data Director that explains the difference between the 'Remove from match list' and 'Not a match' options available in the IDD application in Informatica Master Data Management? If so, then you have reached the right place. Let's understand these two options here.


A) What is the match process?

                Informatica MDM comes with a process named the match process. With the help of this process, we can determine potentially matching records. In other words, we can remove duplicate records from the system. The Informatica Data Director application uses the match engine that comes with MDM in order to achieve this.

                   The IDD application uses the match engine at the time of processing manual match records as well as at the time of creating a new record. This requires some specific settings to be made using the IDD configuration manager.






B) Where in the IDD application we can find the match feature?

                The IDD application is used by data stewards or business users to manage the data. In order to manage the data, records need to be first searched and then opened. Once a record is opened we can see the data, XREF, timeline, history, and match sections. The match section shows potential matching records that can be consolidated with the given record.


C)  Difference between ' Remove from match list ' and ' Not a match' in IDD

                  As discussed in the earlier section, potential matching records are available in the match section. If users are working on the manual merge queue using this match section, they can either merge the record or perform one of the actions below on the merge task.

                  1. Remove from match list

                  2. Not a match





                 'Remove from match list' removes the record from the match view in the IDD application. If the user logs in again, the record will be shown on the screen again.

              On the other hand, if the user selects the 'Not a match' action, then the matching entry is deleted from the match table. The record will not be shown in the IDD view anymore. This will also delete the merge task.

              Learn more about the merge process here -

 


Thursday, August 5, 2021

What is the lifecycle of consolidation_ind ?

    Are you looking for details about the consolidation indicator in Informatica MDM? Are you also interested in knowing the valid values for the consolidation indicator column? If so, then you are at the right place. In this article, we will also explore the lifecycle of the consolidation indicator for a record.


A) What is a consolidation indicator?

          The consolidation indicator is a column in the base object table. The consolidation process, also known as the merge process, updates the consolidation indicator value for a record based on the record's stage of processing.






B) What are valid values for consolidation_ind?

          When we execute load, match, and merge jobs, a record goes through various processes, and as a result the value of the consolidation indicator column moves through the values below:

        4 : New record

        3 : Record queued for the match process

        2 : Record has gone through the match process

        1 : Consolidated record  

a) Consolidation_ind = 4

           The actions below cause a record to have consolidation_ind = 4:

          1. Inserting a new record into the Informatica hub 

          2. Queuing a record as new using the Data Manager 

          3. Unmerging a record, either through an API or the E360 application

b) Consolidation_ind = 3

          The actions below cause a record to have consolidation_ind = 3:

           1. When we queue a record for a match using the Data Manager

           2. If a match job fails, the records that were picked up for matching but for which the match did not complete will have a consolidation indicator value of 3

c) Consolidation_ind = 2

         The actions below cause a record to have consolidation_ind = 2:

         1. When the match process completes for the record

         2. When the record is queued for merge using the Data Manager, or any API or application





d) Consolidation_ind = 1 

           The actions below cause a record to have consolidation_ind = 1:

         1. For the golden source system, the records are loaded with consolidation_ind = 1

         2. When the record is accepted as unique 

         3. If 'Accept Record as Unique' is set to yes and the match process does not find a matching record

         4. If 'Accept Record as Unique' is set to yes and, after merging records, the record does not have any more matches

e) Consolidation_ind =9

         If a business user puts the record on hold, the consolidation indicator is set to 9. (A small reporting sketch for these values follows below.)
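          To observe this lifecycle on a running hub, a simple query that counts records per consolidation indicator can help. The sketch below assumes a hypothetical base object table named C_PARTY and placeholder ORS connection details; adjust both for your environment.

          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.ResultSet;
          import java.sql.Statement;

          public class ConsolidationIndReport {
              public static void main(String[] args) throws Exception {
                  // Placeholder ORS connection details - replace with your own.
                  try (Connection con = DriverManager.getConnection(
                          "jdbc:oracle:thin:@//localhost:1521/ORS", "cmx_ors", "<password>");
                       Statement stmt = con.createStatement();
                       // C_PARTY is a hypothetical base object table name.
                       ResultSet rs = stmt.executeQuery(
                               "SELECT CONSOLIDATION_IND, COUNT(*) AS CNT FROM C_PARTY " +
                               "GROUP BY CONSOLIDATION_IND ORDER BY CONSOLIDATION_IND")) {
                      while (rs.next()) {
                          System.out.println("consolidation_ind = " + rs.getInt(1)
                                  + " -> " + rs.getLong(2) + " records");
                      }
                  }
              }
          }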

 

      Learn more about consolidation indicator in Informatica MDM here -



Tuesday, July 27, 2021

Top 10 commonly used commands in Snowflake

        Are you looking for details about commonly used commands in Snowflake? Are you also interested in knowing what the DML and DDL Snowflake commands are? Then you have reached the right place. In this article, we will explore more about Snowflake commands.

A) Types of commands in Snowflake

          There are four types of commands present in snowflake and those are 

          1. DDL - Data Definition Language

          2. DML - Data Manipulation Language

          3. DCL - Data Control Language

          4. TCL - Transaction Control Language

         In this article, we will focus on the DDL and DML commands used in Snowflake.






B) What are the commonly used DDL commands in Snowflake?

            Here is the list of DDL commands used in snowflake

           1. ALTER

           2. CREATE

           3. DROP 

           4. USE

           5. SHOW


           Let's see each of these commands one by one -

1. ALTER - The ALTER command is used to modify metadata at the account level, parameters for a session, or the metadata of a database object.

Syntax : 

              ALTER <object_type> <object_name> <actions>

e.g.  ALTER SESSION SET <params>


2. CREATE - The CREATE command is used to create a new object.

Syntax : 

              CREATE <object_type> <object_name>

e.g.  CREATE  DATABASE ABC


3. DROP - The DROP command is used to remove an object from the system.

Syntax : DROP <object_type> [IF EXISTS] <identifier>

e.g.  DROP USER IF EXISTS abc_user


4. USE - The USE command is used to specify the role, warehouse, database, or schema for the current session.

Syntax : USE WAREHOUSE <name>

e.g   USE WAREHOUSE xyz 


5. SHOW - The SHOW command is used to provide metadata for objects.

Syntax : SHOW <object_type_plural> [LIKE '<pattern>']

e.g.  SHOW PARAMETERS






C) What are commonly used DML commands in Snowflake?

         Here is the list of DML commands used in Snowflake:

          1. INSERT

          2. MERGE

          3. UPDATE

          4. DELETE

          5. TRUNCATE

             Let's see each of these commands one by one.


1. INSERT - INSERT command is used to insert one or more rows into the table.

       Syntax :

       INSERT INTO <table_name> [(<column_name>, ...)]

           VALUES (<value> | DEFAULT | NULL, ...)

        

       e.g

        INSERT INTO TAB_ABC (id, name)

         VALUES (100, 'DRONA')

2) MERGE - The MERGE command is used to insert, update, and delete values in a table based on values in a subquery or another table.

         Syntax : 

         MERGE INTO <table_name> USING <source> ON <join_expr> ...

         e.g

         MERGE INTO TAB_ABC USING TAB_PQR ON TAB_ABC.ID = TAB_PQR.ID WHEN MATCHED THEN 

         UPDATE SET TAB_ABC.NAME = TAB_PQR.NAME


3. UPDATE: The UPDATE command is used to update rows in the table.

       Syntax : 

              UPDATE <table_name> SET <field>=<value>

       e.g 

            UPDATE TAB_ABC SET NAME = 'XYZ'


4. DELETE : DELETE command is used to delete records from the table.

         Syntax : 

              DELETE FROM <table_name > WHERE <condition>

          e.g 

              DELETE FROM TAB_ABC WHERE NAME='BOB'


5) TRUNCATE: The TRUNCATE command is used to remove all records from a table, while the table itself, along with its privileges and constraints, remains.

          Syntax :

              TRUNCATE TABLE <table _name>

          e.g 

              TRUNCATE TABLE TAB_ABC







Tuesday, July 20, 2021

Top 10 things you need to know before implementing Informatica MDM?

           Are you planning to implement Informatica Master Data Management, aka MDM? Are you unsure what things you need to consider before adopting an MDM solution? If so, then you have reached the right place. In this article, we will see the top 10 things you need to know before implementing Informatica Master Data Management in your organization. 

 1.Data Quality Measurement 
            You need to know how you are currently measuring data quality, not only in a single project but also across the enterprise. This will give you two benefits: first, you will know better options for measuring data quality, and second, you will have a baseline against which to measure data quality after the MDM implementation.





 2.MDM and Data Quality
              Is there a relationship between Master Data Management and Data Quality? Can MDM help in improving data quality? The answer is yes. However, MDM and Data Quality are two distinct processes in any organization. You need to know what the relationship between them is.

 3.Returns on improved data quality
               We initiate various projects to improve processes and to achieve a better return on investment. You need to know what return you will get after improving data quality.

 4.Data Governance 
              Data governance is a crucial part of the business. Are you aware of how data governance is implemented in your enterprise? You need to have proper data governance to get optimum benefits from an MDM implementation.

 5.Data for business strategy 
               It is no secret that this is the era of data. We are in Data 4.0, where the majority of businesses are data-driven. You need to plan your business strategies based on data that is of great quality and well maintained.





 6.Data enrichment
               Why do you need data enrichment? One may ask this question. The answer is that to make better decisions and recommendations, we need to take important steps toward data enrichment.

 7.Privacy regulations 
               These are rules and regulations we need to follow as a business. We need to be fully aware of those rules and regulations and consider them while implementing any MDM solution.

 8.Customer satisfaction 
                Are your customers satisfied with your services? What are your customers' preferences and how are you managing them? How are you addressing your customers' concerns and feedback? These are important questions you need to answer so that you can improve these areas with MDM.

 9.Risk measurement and assessment 
                Informatica MDM definitely plays a vital role in risk assessment and measurement. However, you need to know your current solutions and look for better opportunities to improve them.

 10.Future Perspective
             While implementing Informatica MDM, you need to look for long-term benefits instead of short-term gains; MDM delivers its greatest benefits over the long run.

Learn more about Informatica MDM here

       

Sunday, July 11, 2021

How to fix - "ORA-00245: control file backup failed" issue

 Are you looking for an article about fixing the "ORA-00245: control file backup failed" error? If so, then you have reached the right place. In this article, we will see what the root cause of the "ORA-00245: control file backup failed" error message on the database side is and how to fix it.






What is "ORA-00245: control file backup" error ?

This error message occurs when the archive log backup fails. This is Oracle database level functionality that backs up the archive logs so that you can restore your database quickly and seamlessly in the event of data loss.


What error messages are associated with "ORA-00245: control file backup"?

Here is the list of error messages associated with "ORA-00245: control file backup" -

  • RMAN-03009: Failure of full resync command on default channel
  • RMAN-03002: Failure of configuring command
  • RMAN-03014: implicit resync of recovery catalog failed
  • ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

 

What is the root cause of "ORA-00245: control file backup" error message?

The root cause can be a failure of the SNAPSHOT CONTROLFILE because it is configured on a local file system (e.g. /abc), for example: CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/<oracle home>/snapcf_pqr1.f'


However, you need to find out all possible other root causes.

 

How to fix  "ORA-00245: control file backup" error?

In order to fix this error message, change the SNAPSHOT CONTROLFILE NAME RMAN parameter to point to shared storage, e.g. CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO_PROD/snapcf_pqr1.f'



Learn more about Oracle here -






 


Thursday, July 8, 2021

What are Active VOS Central, Active VOS Console, and Active VOS Process Designer?

     Are you looking for details about Active VOS Central, Active VOS Console, and Active VOS Designer? Would you also like to know what the process designer is? If so, then you have reached the right place. In this article, we explore these tools.





A) Active VOS Central

      Active VOS Central, aka Process Central, is used by business users. The business user uses Process Central to work on tasks such as approving or rejecting requests. Once tasks are generated by the automated business process engine, they are queued in a pool for business users. A request form can be submitted to start an automated business process.

        We can perform the following actions in Active VOS Central -

         a) View Task 

         b) Claim Task 

         c) Approve Or Reject Task

         d) Provide Comments

         e) Attach Files 

         f) Refresh Task

         g) View Task history

         h) Assign Tasks to other users  

B) Active VOS Console

       The Active VOS Console is used by an administrator to monitor and fix processes related to tasks. The administrator can also use the Active VOS Console to perform actions such as deploying workflows, testing user connectivity, configuring URN mapping, deleting or scheduling faulted tasks, and fixing other Active VOS related issues.

       The Active VOS Console is a thin client application that can be accessed using the browser.





C) Active VOS Designer

       Active VOS Designer is also known as the process designer. The process designer is used by developers to create new business workflows. It can also be used to update or customize existing workflows. The process designer contains drag & drop components for easy development with minimal programming. However, it makes extensive use of an expression language.

       Active VOS Central, Active VOS Console, and Active VOS Designer are part of the Informatica product suite. Learn more about Active VOS - here



Wednesday, July 7, 2021

How to achieve better stage job performance in Informatica MDM

      Are you working on MDM and do you want to understand its different aspects? Then this is the right place. In this article, we are going to look at the product recommendations, thread setting properties, and database recommendations for MDM stage job performance.


 A. Product Recommendations for MDM Stage Job Performance

       In this section, we will see the different causes of stage job performance issues and their solutions.

     1) Post/ Pre Stage UE

          The reason for this issue is that if the user exit code runs queries that insert or update records, it can create locks, and because of this it slows down the performance. To troubleshoot this, add logger statements to the UE code. If doable, rerun the jobs after removing the UE code.





     2) Cleanse Function 

            Each and every record has to go through the cleansing process, so we need to check whether any cleanse function is taking a long time to process.

           Depending on which type of cleanse function you are using, we may also need to check the network latency as well as the IDQ end.

     3) Directory 

             The main cause of directory issues is a directory shared between two different instances of the process server.

            To avoid file locks, each process server should have its own directory.

     4) Tables  

             The system performance is reduced if tables such as the RAW, REJ, and C_REPOS_JOB_CONTROL tables contain a huge amount of data.

            We can keep the important data and archive the old data. 

   5) Log level

           We will have to keep the logging level to INFO.


B. Thread setting for MDM Stage Job Performance

          The configuration that can be updated :

       1) Threads for cleanse operation (HUB Console)

          Max value = ( Number of CPU cores -1 )

         2) In cmxcleanse.properties:

             cmx.server.cleanse.min_size_for_distribution

             The default value is 1000.





         3) com.informatica.mdm.batchcontroller.Batchjob.min_rec_for_multithreading

             The default value is 1000. We can decrease this value if multithreading is enabled and the number of records is less than 1000.


C. DB Recommendations for MDM Stage Job Performance.

        1) Collect AWR reports.

        2) To assess the DB performance, collect TESTIO results.


D. Appserver Recommendation for MDM Stage Job Performance

        1) When the process server and DB server are running jobs, check whether the CPU usage is going high.

        2) Check basic Java arguments such as the -Xmx value.

           


 
