DronaBlog

Thursday, December 3, 2020

How to prepare for MuleSoft Certified Developer Certification - Part II

 Are you preparing for the MuleSoft Certified Developer certification and looking for guidelines and material on how to prepare? Then you have reached the right place. While preparing for the MuleSoft Certified Developer certification, I captured notes. I thought I would share them with everyone so that they will be beneficial to whoever is preparing for the certification. This is the second part of the notes. If you have not visited the previous part yet, you can find it here - How to prepare for MuleSoft Certified Developer Certification - Part I. You can visit the third part of the notes here - How to prepare for MuleSoft Certified Developer Certification - Part III







Deploying and managing APIs

1. Deployment stages

                a. Deployed as a web service using Runtime Manager

                b. A proxy application is deployed using API Manager, and policies are created for security and governance

                c. Monitor and analyze usage using Runtime Manager and the Visualizer tool

2. Deployment options

                a. CloudHub - PaaS - Amazon AWS

                b. Customer Hosted Mule Runtime

3. Deployment tools

                a. Embedded connection to Runtime Manager in Anypoint Studio

                b. Runtime Manager in Anypoint platform

4. Worker is dedicated VM that runs Mule application

                It runs in a separate container

                It is deployed and monitored independently

                It runs in a specific worker cloud in a region

                It runs a single application in CloudHub

                An application can be deployed in multiple workers

5. Application can be scaled vertically by changing worker size

6. Application can be scaled horizontally by adding multiple workers

7. API Proxy

                Controls access to a web service; restricts access and usage through the API Gateway.

                Proxy Endpoint -> API Proxy (API Gateway) -> Backend API

                An API proxy is created and deployed to the API Gateway using API Manager

                A proxy is used to create policies and service level agreements (SLAs), migrate the API between environments, grant or deny access to the API, and review API analytics

 8. API Gateway

 API Gateway is responsible for running and managing API Proxy.

 API Gateway also authorizes which traffic is allowed to pass through and access the API by enforcing policies

 API Gateway meters traffic and logs analytical data

 9. API auto-discovery allows a deployed application to connect with API Manager to download policies and act as its own API proxy

 10. A CloudHub worker runs at most one Mule application.

 11. An API proxy application is not responsible for determining which response Mule event is allowed.

 12. The Mule runtime uses an embedded API Gateway to enforce policies and limit access to APIs

Accessing and modifying Mule events

1. Parameters sent in the URL are passed as query parameters in the attributes of the Mule message (attributes.queryParams)

2. By default, a successful HTTP response contains:

                Status code - 200

                Body - Payload

                The response can be modified by changing the body, adding custom headers, and customizing the HTTP status code and reason phrase (see the sketch below)
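For example, the status code of an HTTP Listener response can be customized with an inline DataWeave expression; a minimal sketch (the empty-payload check is purely illustrative):

        #[if (isEmpty(payload)) 404 else 200]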

3. DataWeave is a case-sensitive expression language

4. DataWeave can be used to retrieve information from the attributes, payload, and variables of a Mule event.

5. Types of DataWeave expressions

                Standalone script - generated using the Transform Message component

                Inline expression - used to set the value of properties in an event processor or global configuration. Enclosed in #[]

6. Using DataWeave expressions, we can access all parts of a Mule event, including attributes, payload, and variables, as well as the Mule flow, Mule application, Mule instance, and server (a combined script follows the examples below)

e.g. Message information - #[message.payload]

    attributes - #[attributes.queryParams.param1]

                payload - #[payload]

                variables - #[vars.foo]

 

Examples

#[message.attributes.method] = #[attributes.method]

#[attributes.headers.host]

#[attributes.headers['user-agent']]

 

#[message.payload.id]=#[payload.id]=#[payload['id']]

#[payload.item]
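The individual expressions above can also be combined in a standalone Transform Message script; a minimal sketch reusing the field names from these examples:

        %dw 2.0
        output application/json
        ---
        {
            method: attributes.method,                   // HTTP method from the message attributes
            requestedId: attributes.queryParams.param1,  // query parameter sent in the URL
            data: payload,                               // the message payload
            foo: vars.foo                                // a flow variable set earlier in the flow
        }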

 

7. Selectors in DataWeave expressions (see the example after this list)

                . -> single value selector

                [] -> index selector

                .* -> multi value selector

                .. -> descendants selector
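A minimal sketch showing each selector against a small XML payload (read here from an inline string purely for illustration):

        %dw 2.0
        var order = read("<order><item>pen</item><item>book</item></order>", "application/xml")
        output application/json
        ---
        {
            first: order.order.item,        // single value selector - first matching <item> ("pen")
            all: order.order.*item,         // multi value selector - every <item> (["pen", "book"])
            second: order.order.*item[1],   // index selector applied to that array ("book")
            everywhere: order..item         // descendants selector - all <item> values at any depth
        }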

8. Core module functions are always available to use. Other modules need to be imported before use.
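For example, the Core function upper is available without an import, while capitalize must be imported from the dw::core::Strings module; a minimal sketch:

        %dw 2.0
        import capitalize from dw::core::Strings   // non-Core module functions must be imported
        output application/json
        ---
        {
            shout: upper("mule"),              // Core function - no import needed
            name: capitalize("max the mule")   // imported from dw::core::Strings
        }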

9. Attributes are available only in the request scope. If we need to refer to a value later in the flow, we need to store it in a variable.

10. We can access Mule event data at design time using DataSense, and at runtime using the console and the debugger.

11. Attributes are replaced with new attributes after an outbound HTTP request is made.

12. Scope of variables and attributes

                The variables are accessible in the child flow

                All attributes passed to the child flow are removed or replaced

13. Variables set using a Set Variable operation are not accessible on the server.

 

 

Structuring Mule applications

1. A flow without a source is called a private flow. A private flow has an error handling section.

2. A private flow is accessible only within the application.

3. A subflow is a scope that has only a processor section. It is normally used to create a reusable group of processors. A subflow does not have an error handling section.

4. A subflow can be created:

  a. Manually, by dragging the scope onto the canvas - a Flow Reference must be created manually

  b. By dragging a group of processors from one flow and adding them to another - a Flow Reference does not need to be created manually, as it is generated automatically

5. VM connectors are used to make asynchronous calls.

                VM connectors can be used to communicate with another application running in the same Mule domain

                Queue types:

                a. Transient - faster but less reliable

                b. Persistent - reliable but slower

6. A domain project can be hosted only on a customer-hosted Mule runtime, not on CloudHub

7. Application properties can be configured in a .yaml or .properties file

                Properties can be encrypted

                Properties can be overridden by system properties when deploying to a different environment

8. Properties can be accessed in the following ways (see the sketch after this list):

                a. In global elements or event processors

                                ${db.port}

                b. In a DataWeave expression

                                {port: p('db.port')}
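A minimal sketch of reading a property inside a Transform Message script, assuming the application defines a db.port property as in the example above (the JDBC URL is purely illustrative):

        %dw 2.0
        output application/json
        ---
        {
            port: p('db.port') as Number,                              // p() returns a String, so coerce it
            jdbcUrl: "jdbc:mysql://dbhost:" ++ p('db.port') ++ "/app"  // build a value from the property
        }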

9. Metadata (application-types.xml) can be accessed

                a. From Transform Message component

                b. From the Metadata tab in the properties view for event processors

                c. From the project menu in Package Explorer

10. The VM connector allows a flow to pass events to another flow asynchronously.

11. pom.xml keeps track of dependencies

12. CloudHub workers CANNOT download all possible dependencies a project may contain

13. The ${http.port} property can be defined and used in a Mule application to allow an HTTP listener to be accessed by external web clients

14. When a parent flow with a variable calls a child flow:

                The variable is accessible in the child flow and can be changed. The changes are seen back in the parent flow.









How to prepare for MuleSoft Certified Developer Certification - Part I

 Are you preparing for the MuleSoft Certified Developer certification and looking for guidelines and material on how to prepare? Then you have reached the right place. While preparing for the MuleSoft Certified Developer certification, I captured notes. I thought I would share them with everyone so that they will be beneficial to whoever is preparing for the certification. This is the first part of the notes. You can visit the second part of the notes here - How to prepare for MuleSoft Certified Developer Certification - Part II.







Introducing application networks and API-led connectivity

1. Rate of change - the delivery gap has increased over time.

2. Central IT / Line of Business (LOB) IT - create reusable assets and make them discoverable and reusable

3. Modern API -  discoverable and accessible through self-service. Productized and designed for ease of consumption, secured, scalable and performance-oriented.

4. API-led connectivity

                a. Uses modern API

                b. Three layers - System APIs, Process APIs, Experience APIs

                c. Responsibility -

                                System APIs -> Central IT (Unlock assets and decentralize)

                                Process APIs -> LoB IT (Discover, reuse System API and compose)

                                Experience APIs -> Developers (Discover, self-serve, reuse, and consume process APIs)

                d. Advantages - reusable, agile, productive, better governance, speed within the same timeline

                e. An application network created using API-led connectivity follows a bottom-up approach

5. Center for Enablement (C4E) - a cross-functional team

                Responsibility - promoting consumption of assets in an organization

6. API - Application Programming Interface. It has Input, Output, Operation and Data Types

                The term normally refers to an API specification, a web service (the implementation), or an API proxy (controls access to the web service; restricts access and usage through the API Gateway)

7. Web service - a method of communication between two pieces of software

                i) It has three meanings 

                        a. Web Service API  (Define how to interact with Web Service)

                        b. Web Service Interface (Provide structure) 

                        c. Web Service Implementation (Actual code)

                ii) Types - SOAP Based Web Service, RESTful Web Service

                iii) REST Web Service methods - GET, POST, PUT, DELETE etc.

8. A RESTful web service responds with a status code.

                Status codes:

                200 - OK (GET, DELETE, PATCH, PUT)

                201 - Created (POST)

                304 - Not modified (PATCH, PUT)

                400 - Bad request (all)

                401 - Unauthorized (all)

                404 - Resource not found (all)

                500 - Server error (all)

9. API development lifecycle - API specification (design), simulation (create a prototype and make it available to consumers), validation (output - the API specification/contract)

10. The System layer of MuleSoft API-led connectivity is intended to expose part of a backend system without business logic.

11. A MuleSoft application network is used to create reusable APIs and assets designed to be consumed by other business units.

12. The Center for Enablement creates and manages discoverable assets to be consumed by line of business developers

13. A modern API is designed first using an API specification for rapid feedback

14. 'PUT' HTTP method in RESTful web service is used to replace an existing resource.

 

Introducing Anypoint Platform

1. Anypoint Platform - design, build, deploy and manage

2. Major components:

Design center - (Rapid development) Design API

Exchange - (Collaboration) Discoverable, accessible through self-service

Management center - (Visibility and control) Security, scalability, performance

3. Anypoint platform is used by

                Specialist, Admin, Ops, DevOps, Ad-hoc integrators, App developers

4. Supported platforms

                On-Premises, Private Cloud, Cloud Service Providers, Hosted By Mulesoft (CloudHub), Hybrid

5. Benefits of API-led connectivity

                Speedy delivery, actionable visibility, secure, future proof, intentional self-service

6. API specification phase tools - API Designer, API Console and mocking service, Exchange, API Portal, API Notebook -> output: a validated API specification in RAML

7. Build or implementation phase tools -> Anypoint Studio, MUnit

8. API Management Phase tools - API Manager, API Analytics, Runtime Manager, Visualizer

9. Troubleshooting and scaling - Runtime manager, API manager

10. Design center - To create Integration applications, API Specification, and API Fragments

                Flow designer - Web app to connect systems and consume APIs

                API Designer - Web app to design, document, and mock APIs

                Anypoint Studio - IDE to implement APIs and Build integration applications

11. Mule Applications can be created using Flow Designer or Anypoint Studio or writing code (XML)

                Mule Runtime environment decouples point-to-point integration. It also enforces policies for API governance

12. Mule applications accept and process a Mule event through multiple Mule event processors. All of these are plugged together in a flow.

                A flow is the only thing that is executed in a Mule application.

                Flow has three areas - Source, Process area, Error handling

13. A Mule CloudHub worker is a dedicated instance of Mule that runs a single application

14. A Mule event is a data structure with the components below:

                Mule message

                                Attributes - metadata (headers and parameters)

                                Payload - actual data

                Variables - declared using processors within the flow

15. Flow designer is used to design and develop a fully functional Mule application in a hosted environment

16. A deployed Flow Designer application runs in a CloudHub worker

17. Anypoint Exchange is used to publish, share, and search APIs

18. API portals cannot be created using Design Center

Designing APIs

1. API Design approaches - Hand Coding, Apiary (API Blueprint), Swagger (Open API Specification), RAML

2. RAML is used to auto-generate documentation, mock endpoints, and create interfaces from an API specification

3. RAML contains nodes and facets

                Resources are nodes. They start with /

                Facets are special configurations applied to resources

4. RAML code can be modularized using

                data types, examples, traits, resource types, overlays, extensions, security schemes, schemas, documentation, annotations, and libraries

5. Fragments can be stored

                In files and folders within a project

                In a separate API fragment project in Design Center

                As a separate RAML fragment in Exchange

6. As an anonymous user, we can make calls to an API instance that uses the mocking service but not managed APIs.

7. In order to make an API discoverable, we need to publish it to Anypoint Exchange

Building APIs

1. Mule event source initiates the execution of the flow

2. Mule event processors transform, filter, enrich and process the event data

3. Variables, which are part of the Mule event, are referenced by processors

4. Mule flow contains - Source, Process, and Error Handling

                Source - optional

                Process - required

                Error handling - optional

5. By default, data is returned in Java format. The Transform Message component is used to convert Java to JSON format using DataWeave (see the sketch below)
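A minimal sketch of such a Transform Message script (the payload is whatever Java object the previous processor returned):

        %dw 2.0
        output application/json   // render the incoming Java payload as JSON
        ---
        payload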

6. A RESTful interface for an application will have a listener for each resource method

7. The interface can be created manually or generated from an API definition

8. APIkit is an open-source toolkit that comes with Anypoint Studio and is used to generate an interface based on a RAML API definition.

                It generates a main routing flow and flows for each API resource

                The generated interface can be hooked up to implementation logic

                APIkit creates a separate flow for each HTTP method

                The APIkit router validates requests against the RAML API specification and routes them to the API implementation

9. The Anypoint Platform uses Git for version control, which internally uses pull, push, and merge operations for code edits.








Tuesday, December 1, 2020

Informatica MDM - MDM Installation Topology

 Are you planning to install the Informatica MDM Hub in a Development or Production environment and looking for details about the best possible way to make use of your infrastructure? If so, then you have reached the right place. In this article, we will explore different Informatica MDM installation topologies.






Introduction

Basically, there are three types of topologies recommended by Informatica. We can use one of them while installing the Informatica MDM Hub, based on project needs and the benefits we are looking for. Here is a list of the recommended topologies:

a. MDM Topology for Clusters

b. No Cluster - No High Availability

c. No Cluster - High Availability


A. MDM Topology for Clusters

In this type of topology, the Hub Server and Process Server reside on different machines, and these are clustered together.


Characteristics:




B. No Cluster - No High Availability

In this topology, the Hub Server and Process Servers are not clustered; hence we do not achieve high availability.



Characteristics:



C. No Cluster - High Availability

In this type of topology, the Hub Server and Process Server are not clustered; however, an external load balancer can be used to make the MDM system highly available.




Characteristics:







Detailed information about the types of MDM styles is provided here -














