Dynatrace - Learnings

Digital Experience:


Session Segmentation:

Opens the user sessions view. A user session is the user journey (the interaction of a user's device with your application) - essentially the sequence of actions a user performs within a certain period of time.

The idea behind session segmentation is that you can filter and deep-dive into the actions of many users at the same time.

Below are the session segmentation options:

Analysis over time: how many sessions in total over the selected timeframe.

Application type: web, mobile, etc.

Application versions: which version is used the most in production.

Applications: which application has the most user sessions.

User experience score: the percentages of satisfied, tolerating, and frustrated sessions.

Errors and annoyances: how many sessions contain errors, e.g. 500 sessions with 0 errors, 300 sessions with 1 error.

Conversions and bounces: how many sessions converted, bounced, or neither converted nor bounced.

Users: how many users are new vs. returning. Also shows a user-type breakdown of real users, synthetic users, and bot users.

Browsers: the percentage share of each browser and version (e.g. Chrome, Safari, Opera, Firefox, Internet Explorer).

Internet service provider: the percentage share of sessions per ISP.

Operating systems: the percentage share of each OS being used.

Locations: the percentage share of sessions per geographic area.


You can filter the session details by several of the options above at once.

E.g. user session count - you can filter for users with more than 50 sessions.

You can also create really complex filters.

Individual user session deep-dive:

Click on a user action link, which shows details about the user session (browser type, activity, latency, geo-location, etc.).

User sessions query:

Here we can create custom, advanced queries against completed user sessions, and also generate visualization charts.

e.g. SELECT city, AVG(duration) FROM usersession WHERE country = "USA" GROUP BY city

The documentation covers the query syntax and the available functions.
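As a slightly fuller sketch (the exact syntax and field names such as browserFamily, duration, and userExperienceScore should be verified against the user sessions query documentation), a query like this would rank browsers by the number of frustrated sessions:

SELECT browserFamily, COUNT(*) AS sessions, AVG(duration) AS avgDuration
FROM usersession
WHERE userExperienceScore = "FRUSTRATED"
GROUP BY browserFamily
ORDER BY COUNT(*) DESC
LIMIT 10

The same criteria used for session segmentation above (experience score, browser, location, etc.) can be combined in the WHERE clause to build more complex filters.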

Session Replay:

Another powerful tool within the Digital Experience section is Session Replay. Session Replay lets you record how users interact with your application and play those interactions back.

It provides a recording of the user session. This can be useful for the QA team to understand how errors were produced by the user, because you can see the exact path and actions the user took. It can also help you understand where the user interface design is not really intuitive.

It can also show you where the process flow within the application is too complex for users, where the application was slow, where the application didn't work as expected on particular browsers and devices, and much more - for users with recorded sessions.


Synthetic Monitoring Overview:

Synthetic monitoring is all about making sure your application is accessible and runs as it should from any location.

Many businesses make the mistake of checking their web application only from their local environment - their machine, office, or home - but they don't check whether the application runs as it should from different locations (Europe, Asia, etc.) or on specific browsers.

Dynatrace provides an easy way to monitor the availability and performance of your application as experienced by customers globally. It is essentially a way to simulate user interactions with your application.


There are different types of synthetic monitors:

1. Single-URL browser monitors, which pretty much simulate a user visiting your application with a web browser.

2. Browser clickpaths - simulated user visits that monitor your application's critical workflows, i.e. an exact sequence of interactions that a user completes within your application.

There are also HTTP monitors; the documentation additionally covers supported browsers and synthetic monitor security.

Types of synthetic monitors - https://docs.dynatrace.com/docs/observe/digital-experience/synthetic-monitoring/general-information/types-of-synthetic-monitors

Synthetic monitoring vs. real user monitoring - https://www.dynatrace.com/news/blog/real-user-monitoring-vs-synthetic-monitoring/


Synthetic monitor setup:


1. Configure a browser monitor

     

2. Configure an HTTP monitor


Synthetic monitoring settings - https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-api/configuration-api/calculated-metrics/synthetic-metrics


Observe and Explore - Notebooks and DQL (Dynatrace Query Language)

1. Notebooks Intro:

Notebooks are a new feature from Dynatrace that lets you create really powerful, data-driven documents. Notebooks use the Dynatrace Query Language (DQL), which lets you do a lot of analysis on your data. Therefore, we are also going to use notebooks to learn the Dynatrace Query Language.

You can query, analyze and visualize all of your observability data.

2. Create our first notebook using DQL

Sections are the building blocks of a notebook.

A section can be one of three main things:

        * Query Grail - displays data returned from Grail.

        * Add code - displays data returned by code executed by Dynatrace.

        * Add markdown - formatted text, links, and images.



fetch logs returns all the logs from Grail for the selected time filter. By default, Dynatrace limits the result to 1000 log records for performance reasons.

Options -> decide how to display the data: raw, table, line chart, etc.


Create our first notebook, part 2 - using DQL to fetch, filter, limit, and sort data


Sample query:

fetch logs
| limit 500                                              // limit the number of rows to 500
| fields timestamp, content, event.type, trace_sampled   // display only the needed fields
| filter trace_sampled == "true"                         // keep records where trace_sampled is set
| sort timestamp asc                                     // sort by timestamp, ascending


Notebooks - continue learning the Dynatrace Query Language (DQL):

 
Ex-01: the contains function

fetch logs
| filter loglevel == "ERROR"
| filter contains(content, "code: 400")
// fetch all records with loglevel ERROR whose content contains "code: 400"

Ex-02: the endswith function

fetch logs
| filter loglevel == "ERROR"
| filter endswith(content, "platform is either empty or invalid")


DQL - OR, filtering out, summarizing results, timeseries, countIf

Ex-1: OR

fetch logs
| filter (loglevel == "ERROR" or loglevel == "INFO")
| filter endswith(content, "platform is either empty or invalid")


Ex-2: filterOut

fetch logs
| filterOut loglevel == "INFO"
| filter endswith(content, "platform is either empty or invalid")

Ex-3: summarize

fetch logs
| filterOut loglevel == "INFO"
| summarize count = count()   // total number of remaining records

Ex-4: makeTimeseries

fetch logs
| filterOut loglevel == "INFO"
| makeTimeseries count = count(), by:{loglevel}, interval:5m


Ex-5: countIf

fetch logs
| summarize errors = countIf(loglevel == "ERROR")

Ex-6: count ERROR, INFO, and WARNING logs in one query

fetch logs
| summarize errors = countIf(loglevel == "ERROR"),
            info = countIf(loglevel == "INFO"),
            warning = countIf(loglevel == "WARNING")


DQL - Visualizations:

Summarize by loglevel:

Ex-1:

fetch logs
| summarize count = count(), by:{level = loglevel}

Output:

level    count
DEBUG    140
ERROR    100
INFO     2500

Then open the options and select a suitable chart type for the summarized data.

Notebook Markdown:

Markdown sections allow you to add text, links, and images to notebooks.
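For example, a markdown section could hold a short heading, a link, and an image (standard markdown syntax; the heading text and image URL below are just placeholders):

## Weekly error review

See the [DQL reference](https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-query-language) for query syntax.

![Error trend screenshot](https://example.com/error-trend.png)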


Editing Notebook sections:

You can duplicate a chart in a notebook and edit its query, add a notebook section to a dashboard, and move sections up or down.

DQL - Sampling data:

Analyzing data can be hard when you have thousands or even millions of rows, which makes it impractical to analyze all of your data due to performance and cost. In this case we use sampling to analyze a larger data set.


Ex-1:

fetch logs, samplingRatio: 1000
| summarize count = countIf(loglevel == "ERROR")
| fieldsAdd count = count * 1000   // scale the sampled count back up

Result:

If there are roughly 89,000 errors and you query with a sampling ratio of 1000, the summarize step returns about 89; scaling it back up by 1000 gives a number close to reality.

DQL - Bin data for better analysis

The bin function buckets records by time, so combined with sampling you can see the trend of errors over time.

Ex-1:

fetch logs, samplingRatio: 1000
| summarize count = countIf(loglevel == "ERROR"), by:{bin(timestamp, 1h)}
| fieldsAdd count = count * 1000

The query above shows the approximate error count for each hour within the selected timeframe.

DQL - best practices for query structure

Best-practice query:

fetch logs                                              // fetch the logs
| filter contains(content, "failed")                    // filter as early as possible to narrow down the data
| filterOut loglevel == "NONE"                          // filterOut removes data that is not needed
| fields loglevel, timestamp, content                   // keep only the fields you need
| summarize count = count(), by:{bin(timestamp, 1h)}    // summarize before sorting
| sort timestamp desc                                   // sort
| limit 100                                             // limit the number of returned rows

TIP - DQL Reference:

https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-query-language - you can validate DQL commands and functions here.


Section 13: Observe and Explore - Data Explorer










    

