DronaBlog

Sunday, August 29, 2021

What is the difference between TRUNCATE and DELETE?

         Are you working with databases or learning database concepts and want to know about the basic table-related commands such as DELETE and TRUNCATE? Then this is the right place. In this article, we will learn what these commands mean and why they are used. TRUNCATE and DELETE do a similar job, but in slightly different ways. We will also see the difference between the DELETE and TRUNCATE commands.






TRUNCATE command

- TRUNCATE is a DDL (Data Definition Language) command.

- If you use the TRUNCATE command, it deletes the data in a table but not the table itself.

- TRUNCATE locks the table but does not lock individual rows, as it removes all the data from the table.

- It removes all the rows of the table, but the table structure remains as it is. It does not delete the table structure, columns, indexes, or constraints.

- This command does not accept a WHERE clause.

- After a TRUNCATE operation, rollback is generally not possible because this command does not log individual row deletions from which the data could be restored.

Syntax -

    TRUNCATE TABLE table_name;

Example -

    TRUNCATE TABLE Customer;

DELETE command

- DELETE command is a DML (Data Manipulation Language) command.

- DELETE removes records from a table; without a WHERE clause it removes all of them.

- DELETE locks the rows; each row in the table is locked for deletion.

- If we want to delete a specific row/record from a table, we can use the WHERE clause.

- The table structure, indexes, and attributes are not deleted by the DELETE operation.

- Rollback is possible after a DELETE operation, but only before the COMMIT statement. After the COMMIT statement, the data cannot be rolled back.

 

Syntax -

    DELETE FROM table_name;

Example -

    DELETE FROM Customer;

Syntax (using WHERE clause) -

    DELETE FROM table_name WHERE condition;

Example -

    DELETE FROM Customer WHERE id = 1;

 The statement DELETE FROM Customer; (without a WHERE clause) deletes all the records from the ‘Customer’ table without deleting the ‘Customer’ table itself, while the version with the WHERE clause deletes only the matching record.





Difference between TRUNCATE and DELETE command

- The DELETE command deletes specific records based on the conditions defined in the WHERE clause, but it does not free up the space used by the table.

- The TRUNCATE command does not require a WHERE clause. After executing a TRUNCATE statement, all the space is released along with the removal of the data.

- The DELETE command maintains a log of each deleted row, but the TRUNCATE command does not maintain such logs; hence the TRUNCATE command is faster than the DELETE command.

- The TRUNCATE command uses less transaction log space.
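As a rough sketch of the rollback difference (exact behavior varies by database engine; some engines implicitly commit a TRUNCATE, while others allow it inside a transaction), the following uses the Customer table from the examples above:

    BEGIN;                              -- or START TRANSACTION, depending on the database
    DELETE FROM Customer WHERE id = 1;  -- each deleted row is logged
    ROLLBACK;                           -- the deleted row is restored

    TRUNCATE TABLE Customer;            -- removes all rows; in most engines this
                                        -- cannot be rolled back in the same way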





 

 

Tuesday, August 24, 2021

What is data science and what is it used for?

           Are you looking for details about data science? Are you also interested in knowing how data science is used in the real world? If so, then you have reached the right place. In this article, we will explore data science, and we will also learn about predictive analytics and prescriptive analytics.






A) What is Data Science? 

              Data science is a blend of computer science, business domain knowledge, and mathematics and statistics. It also includes machine learning, data analytics, and advanced analytics. Data science is used to discover various patterns present in the data.


B) What is the difference between data statistics and data science?

                 Data statistics, or data analytics, includes business administration and exploratory data analysis. On the other hand, data science includes data product engineering, machine learning, and advanced algorithms along with exploratory data analysis. In short, data statistics explains what is going on by using historical data, whereas data science explains what is going on and also identifies future events by using machine learning.

                Data science uses predictive causal analytics, prescriptive analytics, and machine learning.

C) What is predictive causal analytics?

                Predictive causal analytics is used to build a model that can predict the possibility of a particular event occurring in the future. For example, if you are a banker giving a loan to a customer, you would like to know the probability of the customer making the loan payments on time. Here, we can develop a model that performs predictive analytics over a period of time.


D) What is prescriptive analytics?

                Prescriptive analytics is used to build a model that has the intelligence to take decisions and the ability to modify itself based on dynamic parameters. For example, Tesla's self-driving car collects data from thousands of miles of driving history across different scenarios such as signal lights, turns, etc. By applying intelligence to this data, the car is able to take decisions such as when to turn.






E) Machine learning

                  Machine learning is a method of analyzing data that is used for making predictions. Machine learning is also used for pattern discovery.

                 Various tools use machine learning to enhance their capabilities.


      

Saturday, August 21, 2021

What are the log files in Informatica MDM?

        Are you trying to analyze an issue in Informatica MDM? Are you looking for details of the log files which are generated during various processes in Informatica MDM? This article will explore the log files, their locations, and when to use them.





A) Introduction

         Informatica MDM has various components such as the application server, the database, a business process management tool, and application user interfaces such as Entity 360 or Customer 360. Each of these components generates logs during its processing.

         Here we will understand the various types of log files. These are:

         1. Hub server logs

         2. Process or cleanse server logs

         3. E360 logs

         4. Provisioning logs

         5. Post Installation logs

         6. Elastic search logs

         7. Application server logs

         8. Database logs

1. Hub server logs

         Informatica MDM has two core components: the hub server and the process server (earlier called the cleanse server). The hub server is used to initiate jobs and to manage and control threads; in short, the hub server is the master component in Informatica MDM. Logs are generated when we access the Administration section of the MDM hub, especially when we validate the ORS. These logs are captured in the hub server logs.

          Location :- <Informatica MDM install folder>/hub/server/logs

          e.g. /abc/hub/server/logs/cmxserver.log

2. Process server logs

           When we execute jobs such as stage, load, tokenization, match, and merge jobs, the logs are captured in the process server logs.

            Location :- <Informatica MDM install folder>/hub/cleanse/logs

            e.g. /abc/hub/cleanse/logs/cmxcleanse.log

3. E360 logs

              We can configure the user interface using the Provisioning tool. This user interface is called the Entity 360 application. When we access the application, the logs are generated.

              Location :- <Informatica MDM install folder>/hub/server/logs

              e.g. /abc/hub/server/logs/entity360view.log

4. Provisioning logs

             We use the Provisioning tool to configure business entities, transformations, views, tasks, and E360 applications. When we use the Provisioning tool, the logs are generated and stored at the location below.

             Location :- <Informatica MDM install folder>/hub/server/logs

             e.g. /abc/hub/server/logs/provisioning.log





5. Post Install logs

               The post-install logs are generated when we install Informatica MDM, as well as when we apply an EBF or perform an upgrade.

               Location :- <Informatica MDM install folder>/hub/server/logs

               e.g. /abc/hub/server/logs/postInstallSetup.log

6. Elastic search logs

                If you are using Elasticsearch in your Informatica MDM environment, then you may need to use the Elasticsearch logs.

                Location :- <Elasticsearch folder>/logs

                e.g. /aqr/logs/elasticsearch.log

7. Application server logs

                Application server logs are located as below:

                a) JBoss

                <JBoss home>/standalone/log/server.log

                b) WebLogic

                <WebLogic home>/domains/<domain name>/servers/<server name>/logs/abc.out

                c) WebSphere

                <WebSphere home>/AppServer/profiles/<profile name>/logs/<server name>/SystemOut.log

8. Database logs

                  Database logs are not directly accessible. You need to reach out to your DBA to get database logs.


           Learn more about Informatica MDM here







        



Friday, August 20, 2021

What is the difference between 'Remove from match list' and 'Not a match' in IDD?

             Are you looking for an article on Informatica Data Director which explains the difference between the 'Remove from match list' and 'Not a match' options available in the IDD application in Informatica Master Data Management? If so, then you have reached the right place. Let's understand these two options here.


A) What is the match process?

                Informatica MDM comes with a process named the match process. With the help of this process, we can determine potential matching records; in other words, we can remove duplicate records from the system. The Informatica Data Director application uses the match engine that comes with MDM in order to achieve this.

                   The IDD application uses the match engine when processing manual match records as well as when creating a new record. This requires some specific settings to be made using the IDD configuration manager.






B) Where in the IDD application can we find the match feature?

                The IDD application is used by data stewards or business users to manage the data. In order to manage the data, records first need to be searched and then opened. Once a record is opened, we can see the data, Xref, timeline, history, and match sections. The match section shows potential matching records to consolidate with the given record.


C) Difference between 'Remove from match list' and 'Not a match' in IDD

                  As discussed in the earlier section, potential matching records are available in the match section. If users are working on a manual merge queue using this match section, they can either merge the records or perform one of the below actions on the merge task.

                  1. Remove from match list

                  2. Not a match





                 The 'Remove from match list' action removes the record from the match view in the IDD application only for the current session. If the user logs in again, the record will be shown on the screen again.

              On the other hand, if the user selects the 'Not a match' action, the matching entry is deleted from the match table. The record will no longer be shown in the IDD match view. This action also deletes the merge task.
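              As a rough illustration of what 'deleted from the match table' means, assuming a base object named C_PARTY whose match table is C_PARTY_MTCH and which stores match pairs in columns such as ROWID_OBJECT and ROWID_OBJECT_MATCHED (hypothetical names; the actual table and column names depend on your data model and MDM version), a query like the one below could be used to confirm that a match pair is gone after choosing 'Not a match':

                  -- Hypothetical check: table and column names are assumptions.
                  SELECT COUNT(*)
                  FROM c_party_mtch
                  WHERE rowid_object = '1001' OR rowid_object_matched = '1001';

              A result of zero for the record in question would indicate that no match entries remain for it.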

              Learn more about the merge process here -

 


Thursday, August 5, 2021

What is the lifecycle of consolidation_ind?

    Are you looking for details about the consolidation indicator in Informatica MDM? Are you also interested in knowing the valid values for the consolidation indicator column? If so, then you are in the right place. In this article, we will also explore the lifecycle of the consolidation indicator for a record.


A) What is a consolidation indicator?

          The consolidation indicator is a column in the base object table. The consolidation process, also known as the merge process, updates the consolidation indicator value for a record based on the stage of processing the record has reached.






B) What are valid values for consolidation_ind?

          When we execute the load, match, and merge jobs, a record goes through various processes, because of which the value of the consolidation indicator column moves through the values below:

        4 : New record

        3 : Record queued for the match process

        2 : Record has gone through the match process

        1 : Consolidated record
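        As a quick sketch, assuming a base object table named C_PARTY (a hypothetical name; substitute your own base object), the distribution of these values can be inspected with a simple query:

            -- Count records at each stage of the consolidation lifecycle
            SELECT consolidation_ind, COUNT(*) AS record_count
            FROM c_party
            GROUP BY consolidation_ind
            ORDER BY consolidation_ind;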

a) Consolidation_ind = 4

           The actions below cause a record to have consolidation_ind = 4:

          1. Inserting a new record into the Informatica hub

          2. Queuing a record as new using the Data Manager

          3. Unmerging the record, either through an API or the E360 application

b) Consolidation_ind = 3

          The actions below cause a record to have consolidation_ind = 3:

           1. Queuing a record for a match using the Data Manager

           2. If the match job fails, the records that were picked up for matching but for which the match did not complete will have a consolidation indicator value of 3.

c) Consolidation_ind = 2

         The actions below cause a record to have consolidation_ind = 2:

         1. When the match process completes for the record

         2. When the record is queued for merge using the Data Manager, an API, or an application.





d) Consolidation_ind = 1 

          The actions below cause a record to have consolidation_ind = 1:

         1. Records from the golden source system are loaded with consolidation_ind = 1.

         2. When the record is accepted as unique

         3. If 'Accept record as unique' is set to Yes and the match process does not find a matching record.

         4. If 'Accept record as unique' is set to Yes and, after merging records, the record does not have any more matches.

e) Consolidation_ind = 9

         If a business user puts the record on hold, it gets consolidation_ind = 9.

 

      Learn more about consolidation indicator in Informatica MDM here -


