New hazard charts in General Analysis

Two new charts have been added to the General Analysis application related to assessing hazard with the frequency-magnitude relationship. The new charts plot various hazard parameters over time or by time of day:

                Charts / Time Series / Hazard over Time

                Charts / Diurnal / Diurnal Hazard

The following parameters can be plotted in each chart (maximum two at a time):

Mmin – The magnitude of completeness; the magnitude above which the dataset is complete.

b-value – The slope of the Gutenberg-Richter distribution, which describes how the frequency of events scales with magnitude.

N at Mref – The number of events (N) above the reference magnitude (Mref, user defined). Note that for reference magnitudes less than Mmin, N will not reflect the actual number of events in the database, since it is based on the Gutenberg-Richter distribution, assuming a complete dataset.

Hazard Probability – The probability of an event exceeding the design magnitude (user defined) within one year.

Hazard Magnitude – The magnitude that, to a certain reliability (user defined), won’t be exceeded within one year. Hazard Magnitude is essentially the inverse of Hazard Probability.

Each chart is generated by breaking up the data into bins and fitting the Gutenberg-Richter distribution. The bin width can be set in the control panel. Since there can be a lot of variability in the data and fitting procedures, there are also controls to smooth the results with a user-defined bandwidth.
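To make the parameters above more concrete, here is a minimal sketch of how they relate under a Gutenberg-Richter relation with Poissonian event occurrence. The function name, inputs and the way Mmin and the b-value are supplied are illustrative assumptions, not the mXrap implementation.

```python
import numpy as np

def hazard_parameters(mags, period_years, mmin, b, mref, m_design, reliability):
    """Illustrative hazard parameters for one bin of events (hypothetical helper).
    mmin and b are assumed to come from the frequency-magnitude fitting step."""
    # Annualised rate of events at or above Mmin
    n_mmin = np.sum(np.asarray(mags) >= mmin) / period_years
    # a-value of the annualised G-R relation: log10(N >= M) = a - b*M
    a = np.log10(n_mmin) + b * mmin
    # N at Mref: expected annual count above the reference magnitude
    n_mref = 10 ** (a - b * mref)
    # Hazard Probability: chance of exceeding the design magnitude within one year,
    # assuming Poissonian event occurrence
    rate_design = 10 ** (a - b * m_design)
    p_exceed = 1.0 - np.exp(-rate_design)
    # Hazard Magnitude: magnitude not exceeded within one year at the given reliability
    m_hazard = (a - np.log10(-np.log(reliability))) / b
    return n_mref, p_exceed, m_hazard
```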

The figure below is an example of the Diurnal Hazard chart, showing how the b-value varies with the time of day. The b-value drops from around 1.3 to 0.7 during shift change. This represents a large difference in hazard, which is highly sensitive to the b-value (illustrated in a previous post).

Note that the hazard calculations assume a constant b-value within the analysis volume. This can result in an underestimated hazard (explained in the Hazard Iso’s blog post). For more accurate results, use the hazard assessment application, where the volume is discretised and the probabilities are integrated together from the small scale to the large scale.

If you would like to arrange a root upgrade to get these charts, let us know at support@mxrap.com.

 

Background filters in the hazard app

The new background filters have been added to the Hazard Assessment application. The time-of-day filter can be used to see the effect of removing events during blasting/shift change on the hazard results. You can view the results in either raw or normalised form.

The hazard calculations already normalise the event rate to represent hazard in yearly terms. If your analysis period is 6 months, the number of events is doubled to represent a year’s worth of events. When the time-of-day filter is applied though, the actual analysis period is less than 6 months, because several hours per day have been removed. Without normalisation, the hazard should always drop when applying the time-of-day filter, because you are removing events and nothing else changes (i.e. still using 6 months). If normalisation is turned on, the time period that has been removed is accounted for in the hazard calculations. The results then accurately represent the state of the hazard during the relevant times of day.
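As a simple illustration of the normalisation (with made-up numbers, not mXrap internals), suppose a 6-month analysis period with 4 hours per day removed by the time-of-day filter:

```python
# Illustrative only: hypothetical numbers, not taken from the application.
analysis_days = 182.5                     # 6 months
hours_removed_per_day = 4.0
n_events_remaining = 900                  # events left after the time-of-day filter

# Without normalisation the full 6 months is still used, so the annualised
# rate (and hence the hazard) drops simply because events were removed.
rate_raw = n_events_remaining / (analysis_days / 365.0)

# With normalisation the removed hours shrink the effective analysis period,
# so the rate reflects the hazard state during the remaining times of day.
effective_days = analysis_days * (24.0 - hours_removed_per_day) / 24.0
rate_normalised = n_events_remaining / (effective_days / 365.0)

print(rate_raw, rate_normalised)          # the normalised rate is higher
```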

Normalisation also applies to the short-term responses filter, where events can be removed based on a time and distance from a blast or significant event. In this case the normalisation is a bit more complicated. With the time-of-day filter, the effective analysis period is the same for the whole grid. Here, however, there is an uneven distribution of space and time removed from the analysis, so each individual cell has its own effective analysis period, based on how many triggers (and responses) are nearby. The idea is still the same though: without normalisation, the hazard will drop due to the removal of events without adjusting the analysis period. With normalisation turned on, the results will represent the hazard state outside of short-term response regions.

A new chart has been added to the Hazard Assessment app that shows the effect of different short-term response filtering on hazard. The chart works in a similar way to the Track Volumes over Time chart, by computing the hazard over and over again, automatically changing variables with each run. The chart and associated control panel can be found in the Hazard Assessment / Hazard ISO’s window, under the Response Analysis menu. To generate the chart, you need to specify a maximum response time, a time delta, and response distances (up to 6). The hazard will be calculated for each response distance and for each response time from zero to the maximum (at delta intervals). The hazard recorded is the probability of exceeding the design magnitude within the chosen grid, which is the value displayed in the footer of the 3D ISO view. It can take some time to calculate, depending on how many iterations you specify. The video below shows the chart being generated for response times up to 72 hours and response distances of 50, 100 and 150 m.
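Conceptually, the chart is just a parameter sweep over the response filter settings. Here is a rough sketch of the loop structure; filter_responses and compute_hazard are hypothetical stand-ins for the response filtering and the full grid-based hazard calculation, not the mXrap API.

```python
import numpy as np

def response_analysis_sweep(events, triggers, max_time_h, delta_h,
                            response_distances_m, filter_responses, compute_hazard):
    """Record the grid hazard for each response time/distance combination.
    filter_responses(events, triggers, time, distance) removes short-term responses;
    compute_hazard(events) returns the probability of exceeding the design magnitude."""
    response_times = np.arange(0.0, max_time_h + delta_h, delta_h)
    results = {}
    for dist in response_distances_m:            # up to 6 response distances
        results[dist] = [compute_hazard(filter_responses(events, triggers, t, dist))
                         for t in response_times]
    return response_times, results
```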

 

Stochastic Declustering Explained

As mentioned in the last blog post, a stochastic declustering algorithm has been implemented in mXrap to separate events into ‘clustered’ and ‘background’ components. It can be useful when designing seismic exclusions and re-entry procedures to separate seismicity that occurs in short bursts from seismicity that has low variability in space and time. Short-term exclusions cannot be used to manage the risk associated with background seismicity, since the hazard inside a potential exclusion would be the same as outside the exclusion. Efficient exclusion and re-entry procedures target areas where seismicity is most clustered and where the seismic hazard to which people are exposed can be reduced with a short disruption to production.

The filter controls for stochastic declustering in General Analysis are in ‘Event Filters / Background Activity’ and a new chart has been added to show the cumulative events of the two components in ‘Charts / Time Series / Declustered CNE’. An example of the cumulative declustered events chart is shown below for a week’s worth of events at the Tasmania mine. In this case approximately 32 % of events have been classified as ‘background’.

 

 

The declustering is based on the distribution of inter-event times (time between successive events). The distribution (PDF) of inter-event times has been shown to follow the gamma distribution (Corral 2004). The chart below shows how the events in the example above (black crosses) closely follow the gamma distribution (red line).  Hainzl et al. (2006) showed how to estimate the rate of background events from the gamma distribution, based on the mean (µ) and standard deviation (σ).

Background Proportion = µ² / σ²

Background seismicity is generally assumed to be stationary and Poissonian. In other words, the average time between events is constant and known, but the exact timing between events is random. Each event is assumed to be independent and not affect the occurrence of other events. The inter-event time of a Poisson process follows the exponential distribution (green line).
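A minimal sketch of the Hainzl et al. (2006) estimate above, using the sample mean and standard deviation of the inter-event times (the function name is illustrative):

```python
import numpy as np

def background_proportion(event_times):
    """Estimate the proportion of background events from inter-event times,
    following Hainzl et al. (2006): proportion = mean^2 / variance."""
    t = np.sort(np.asarray(event_times, dtype=float))
    dt = np.diff(t)                       # inter-event times
    mu, sigma = dt.mean(), dt.std()
    return (mu / sigma) ** 2
```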

The event distribution clearly deviates from the background distribution for small inter-event times. This deviation is caused by the clustered component of seismicity. The distribution of small inter-event times corresponds to the inverse distribution (yellow line), which is explained by sequences that follow the Modified Omori Law (MOL). In this case the slope of the distribution corresponds to the MOL with decay parameter p ≈ 0.8.

The declustering method was described by van Stiphout et al. (2012). The probability that an event is part of the background (purple line) is calculated based on the inter-event time and the ratio between the background and gamma PDFs. Events with small inter-event times are more likely to be clustered events. Events with large inter-event times are more likely to be background events.
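Here is a sketch of one plausible reading of that thinning step, using SciPy to fit the gamma distribution and an exponential background PDF weighted by the background proportion. The details (parameter estimation, scaling, treatment of the first event) are simplifying assumptions rather than the exact mXrap implementation.

```python
import numpy as np
from scipy import stats

def stochastic_decluster(event_times, rng=None):
    """Label each event 'background' or 'clustered' from its inter-event time."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.sort(np.asarray(event_times, dtype=float))
    dt = np.diff(t)

    # Background proportion and rate (Hainzl et al. 2006)
    frac_bg = (dt.mean() / dt.std()) ** 2
    rate_bg = frac_bg / dt.mean()

    # Fit the gamma distribution to the observed inter-event times (Corral 2004)
    shape, loc, scale = stats.gamma.fit(dt, floc=0.0)

    # Probability each event is background: weighted exponential (background) PDF
    # over the gamma (overall) PDF, larger for long inter-event times
    p_bg = frac_bg * stats.expon.pdf(dt, scale=1.0 / rate_bg) \
           / stats.gamma.pdf(dt, shape, loc=loc, scale=scale)
    p_bg = np.clip(p_bg, 0.0, 1.0)

    # Random draw, so an individual event can change label between runs
    labels = np.where(rng.random(dt.size) < p_bg, "background", "clustered")
    # The first event has no preceding inter-event time; call it background here
    return np.concatenate([["background"], labels])
```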

 

 

It is important to note the random component in the declustering process. A specific event may be classed as ‘clustered’ on one run of the declustering and ‘background’ on the next, although the overall proportions will remain the same (hence the ‘stochastic’ in stochastic declustering). There is no consideration given to the spatial clustering of events; all events are assessed together in the time domain. There is also no consideration given to the magnitude of events.

The rate of background events is assumed to be constant, although in reality the background rate will slowly vary over time with changes in system sensitivity, general rates of extraction and mining locations. To account for long-term fluctuations in background rate, events are broken down into groups, and the background proportion is computed separately for each group. Groups of events are kept as small as possible, subject to a minimum number of events and a minimum time period (user defined). The background rate is assumed constant within each group.
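One simple way to form such groups (the exact grouping rules in mXrap may differ) is to walk through the events in time order and close a group once both minimums are satisfied:

```python
def group_events(times, min_events, min_days):
    """Split chronologically ordered event times (in days) into groups that
    satisfy both a minimum event count and a minimum duration."""
    groups, start = [], 0
    for i in range(1, len(times) + 1):
        if i - start >= min_events and times[i - 1] - times[start] >= min_days:
            groups.append((start, i))
            start = i
    if start < len(times):                # fold any leftover events into the last group
        if groups:
            groups[-1] = (groups[-1][0], len(times))
        else:
            groups = [(0, len(times))]
    return groups
```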

Aside from General Analysis, the stochastic declustering process has been added to the Hazard Assessment, Short-term Response Analysis, and Seismic Monitoring apps. The background filters in the hazard app can be used to compare the seismic hazard of clustered and background seismicity (as per below). Background rates are also calculated for triggers in the short-term responses app and for the reference rate in the activity rate monitoring window.

 

 

For those wishing to read more about the declustering process, the CORSSA article by van Stiphout et al. (2012) is a good summary of many different approaches used in earthquake seismology, including the method described here.

New Background Filters

We have added some new event filter options to General Analysis related to ‘background’ activity. ‘Background’ events are generally defined as having low variability in space and time. The new background filters aim to identify events that are clustered in space and time, and the user can display either the ‘clustered’ or the ‘background’ component of seismicity.

There are three ways of classifying clustered events: by time of day, by proximity to blasts and significant events, and by a stochastic declustering procedure. Stochastic declustering will be explained in a separate blog post.

With the time-of-day filter, you can specify up to five periods of the day to define increased activity around shift change/blasting times. Times are entered in hours, e.g. 5:30pm = 17.5. Events within these periods will not be shown by default, but you can toggle/invert the time-of-day filter to only show events inside the time-of-day periods (and hide events outside).
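A minimal sketch of this kind of filter, assuming event times have already been converted to decimal hours of the day (the function and inputs are illustrative):

```python
import numpy as np

def time_of_day_filter(event_hours, periods, invert=False):
    """Mask of events to keep. event_hours: decimal hour of day for each event
    (e.g. 5:30pm = 17.5). periods: up to five (start, end) tuples in decimal hours.
    By default events inside the periods are removed; invert=True keeps only them."""
    h = np.asarray(event_hours, dtype=float)
    inside = np.zeros(h.shape, dtype=bool)
    for start, end in periods:
        if start <= end:
            inside |= (h >= start) & (h < end)
        else:                              # period wrapping past midnight
            inside |= (h >= start) | (h < end)
    return inside if invert else ~inside
```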

With the short-term responses filter, you can define a time period and spherical radius around blasts and significant events to filter out events. Use the normal blast filter to control which blasts are considered. Significant events are considered if they are within the current base filter, and above the specified magnitude. Note that the significant event itself is not filtered out (it is treated as a background event, not a clustered event). Just like the time-of-day filter, you can toggle/invert the filter to only show the responses, and hide events outside the response windows.
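And a similar sketch for the short-term responses filter, flagging events that fall within a time window after, and a spherical radius around, any trigger (again, the function and inputs are illustrative, not the mXrap internals):

```python
import numpy as np

def short_term_response_filter(event_times, event_xyz, trigger_times, trigger_xyz,
                               response_time, response_radius, invert=False):
    """Mask of events to keep. Events strictly after a trigger (blast or significant
    event), within response_time of it and within a spherical response_radius,
    are treated as responses. The trigger event itself is not flagged."""
    ev_t = np.asarray(event_times, dtype=float)[:, None]
    tr_t = np.asarray(trigger_times, dtype=float)[None, :]
    dt = ev_t - tr_t                                       # events x triggers
    dist = np.linalg.norm(np.asarray(event_xyz, dtype=float)[:, None, :] -
                          np.asarray(trigger_xyz, dtype=float)[None, :, :], axis=2)
    in_response = np.any((dt > 0) & (dt <= response_time) &
                         (dist <= response_radius), axis=1)
    return in_response if invert else ~in_response
```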

The last filter option is an automatic classification system for separating background and clustered events. You can toggle between each component of seismicity defined from stochastic declustering. Watch out for the next blog post if you are interested in the details of how this method works.

This is a new addition to the General Analysis application. Find the panel under ‘Event Filters / Background Activity’. If you don’t see this option, you need a root upgrade. Get in touch with support to arrange an upgrade by emailing support@mxrap.com.

 

Moment Tensors – A Practical Guide

Moment tensor analysis is a topic that carries a decent level of uncertainty and confusion for many people, so I’m going to lay it out as simply as I can. For this post, I’m not going to go into too much detail on how moment tensors are actually calculated. Instead, I’m going to summarise the things I think are most important for geotechnical engineers to know when interpreting moment tensor results.

 

OK, so, what is a moment tensor?

 

A moment tensor is a representation of the source of a seismic event. The stress tensor and the moment tensor are very similar ideas. Much as a stress tensor describes the state of stress at a particular point, a moment tensor describes the deformation at the source location that generates seismic waves.

You can see the similarity between the stress and moment tensors in the figure below. The moment tensor describes the deformation at the source based on generalised force couples, arranged in a 3 x 3 matrix. The matrix is symmetric, though, so there are only six independent elements (i.e. M12 = M21). The diagonal elements (e.g. M11) are called linear vector dipoles. These are equivalent to the normal stresses in a stress tensor. The off-diagonal elements are moments defined by force couples (moments and force couples were discussed in a previous blog post).
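In code, the six independent components simply fill a symmetric 3 x 3 matrix, for example:

```python
import numpy as np

def moment_tensor(m11, m22, m33, m12, m13, m23):
    """Build the symmetric 3x3 moment tensor from its six independent
    components (M12 = M21, etc.). Units are typically N·m."""
    return np.array([[m11, m12, m13],
                     [m12, m22, m23],
                     [m13, m23, m33]])
```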

 

 

Producing a moment tensor of a seismic event requires the Green’s function. This function computes the ground displacement recorded by a seismic sensor from a known moment tensor (the forward problem). A moment tensor inversion is when the inverse Green’s function is used to find the source moment tensor from the sensor data.

 

 

Sure… but what’s with the beach balls?

 

It’s pretty hard to interpret a 3 x 3 matrix of numbers, so moment tensors are usually displayed as beach balls, either 2D or 3D. I will mostly discuss the 3D case; the 2D diagram is just a stereonet projection of the 3D beach ball.

The construction of a beach ball diagram is very simple. For each point on the surface of a sphere, the moment tensor describes the magnitude and direction of the first motion. If the direction of motion is inwards, towards the source, the surface is coloured white (red arrows). If the direction of motion is outwards, away from the source, the surface is coloured black (blue arrows). Where there is a border between black and white on the beach ball surface, the direction of motion is tangential (purple arrows). The direction of motion across the border is white-to-black.

The figure below shows the first ground motion on the beach ball surface, split into radial and tangential components. The lengths of the radial and tangential arrows are proportional to the strength of the P and S waves respectively. P-waves generally emanate strongest from the middle of the white and black regions. S-waves emanate strongest from the black-white borders.
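For the curious, the colouring can be sketched directly from the tensor: the far-field P-wave (radial) amplitude in a take-off direction r is proportional to rᵀMr, so its sign gives the black/white split. A small illustration (the pure double-couple tensor below is just an example):

```python
import numpy as np

def first_motion_colour(M, direction):
    """Colour of the beach ball surface in a given take-off direction.
    The radial (P-wave) first motion is proportional to r^T M r:
    positive means outward motion (black), negative means inward (white)."""
    r = np.asarray(direction, dtype=float)
    r = r / np.linalg.norm(r)
    radial = r @ M @ r
    return "black" if radial > 0 else "white"

# Example: a pure double couple (shear on a plane normal to x, slip along y)
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
print(first_motion_colour(M_dc, [1, 1, 0]))   # black (outward; quadrant containing the T axis)
print(first_motion_colour(M_dc, [1, -1, 0]))  # white (inward; quadrant containing the P axis)
```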

 

 

The location of the pressure and tension axes can be confusing. If you look at the S-waves diagram, the tension axis is in the compressional quadrant. However, it does make more sense from the P-waves diagram. The black/white convention can also be counter-intuitive for some. ‘Black’ holes pull things inwards, the sun radiates ‘white’ light outwards, but the beach ball diagram is the opposite of that. I’m sorry I don’t know why this is the convention. Perhaps seismologists are Star Wars fans… Vader wants Luke to come to the dark side, and so this is the movement direction that he is tempted towards… that’s all I got 😊. 

 

Right, but what can I learn about the event mechanism?

 

Even with the beach ball diagram, it can still be hard to interpret the geological or physical mechanism of the event. This is why the moment tensor is often decomposed into its constituent elementary source mechanisms. To decompose the moment tensor, the matrix is rotated to zero the off-diagonal elements. This is just like finding the principal axes of a stress tensor, by zeroing the shear elements and leaving the normal stresses. So every moment tensor can be expressed as three orthogonal linear vector dipoles, rotated to a particular orientation. These three dipoles are referred to as the P (pressure), B (or N, neutral or null) and T (tension) principal axes.
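Numerically, this rotation is just an eigendecomposition of the symmetric matrix, for example:

```python
import numpy as np

def principal_axes(M):
    """Rotate the moment tensor to its principal axes (zero off-diagonals).
    Returns the three dipole magnitudes and their orientations: the largest
    eigenvalue corresponds to the T axis, the smallest to the P axis and the
    middle one to the B (null) axis."""
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    p_axis, b_axis, t_axis = vecs[:, 0], vecs[:, 1], vecs[:, 2]
    return vals, p_axis, b_axis, t_axis
```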

Isotropic source

In combination, the three dipoles result in either an overall expansion or a contraction of the source volume. If the source is explosive, the largest dipole direction is the T axis and the smallest dipole is the P axis. These are reversed for an implosive source. For a pure isotropic source, though, the axis orientations have no meaning.

The isotropic component is the portion of the tensor that represents a uniform volume change. Only P-waves radiate from a purely isotropic source. A positive isotropic component is an expansion/explosion. This can be a confined blast or possibly rock bulking. A negative isotropic component is a contraction/implosion. Any pillar burst, buckling or rock ejecting into a void will likely appear as an implosion; given the path of the recorded waves around the void, all first motions will be towards the source.

Deviatoric source

When the isotropic component is removed from the moment tensor, the remainder is the deviatoric component. The deviatoric tensor results in displacement that has zero net volume change, i.e. equal movement in, equal movement out. The underlying geological process for the deviatoric component is a general dislocation on a fault. The general dislocation can be a mix of shear and normal dislocation (although still with no net volume change). To better interpret the relative proportions of shear and normal displacement, the deviatoric component can be decomposed into the DC and CLVD elemental sources.

 

 

Double Couple (DC) source

The DC source is a pure shear dislocation. It is referred to as a double couple because there are two force couples and two (alternate) fault plane orientations that equally model the expected displacement. This notion was discussed in a previous post. The shear direction on the fault is from white-to-black. You can review the orientation of the two planes in relation to your site geology. It may be the case that one of the planes makes more sense than the other or you can find the specific structure.

A pure DC source has two equal and opposite linear vector dipoles. The third dipole is zero (B or null axis). The embedded video shows the direction of first motions from a pure DC source. As mentioned already, motion is inwards for the white regions, outwards for the black regions and tangential across black-white borders. Radial movement radiates P-waves, tangential movement radiates S-waves.

 

Compensated Linear Vector Dipole (CLVD) source

The CLVD source is a normal dislocation on a plane. The normal displacement from one linear vector dipole is ‘compensated’ (hence the name) by opposing displacement from the other two linear vector dipoles so that there is no net volume change.

For a positive CLVD source, a single tensile dipole is compensated by two compressive dipoles.

 

Vice-versa for a negative CLVD source.

 

A pure CLVD source would imply a Poisson’s ratio of 0.5, which is more like chewing gum or toothpaste than rock, so there is no geological example of a pure CLVD source. It can make sense, though, as a mixed-source event, i.e. part isotropic, part CLVD. This event mechanism may be dominant for confined pillar crushing events. The Hudson chart indicates two key points that are a combination of isotropic and CLVD sources. A single linear vector dipole (the other two dipoles are zero) decomposes to a source that is one-third isotropic, two-thirds CLVD. A pure tensile crack mechanism decomposes to a source that is 55% explosive, 45% positive CLVD.

The Hudson chart is a useful tool to visualise the moment tensor decomposition, seeing the relative proportions of the isotropic, DC and CLVD elemental sources. The vertical axis is the isotropic component, from -100% (implosion) to 100% (explosion). The horizontal axis is the deviatoric decomposition, from +100% to -100% CLVD, with 100% DC in the centre (0% isotropic, 0% CLVD). The outer border is the 0% DC line.
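As a rough sketch of how such proportions can be computed (there are several decomposition conventions in the literature, so this is only one variant and not necessarily what the Hudson chart in mXrap uses):

```python
import numpy as np

def decompose(M):
    """Split a moment tensor into isotropic, DC and CLVD parts (one common
    convention based on the deviatoric eigenvalues; variants exist)."""
    iso = np.trace(M) / 3.0
    M_dev = M - iso * np.eye(3)
    dev_vals = np.linalg.eigvalsh(M_dev)
    dev_sorted = dev_vals[np.argsort(np.abs(dev_vals))]   # order by absolute size
    m_min, m_max = dev_sorted[0], dev_sorted[2]
    eps = -m_min / abs(m_max) if m_max != 0 else 0.0      # 0 = pure DC, +/-0.5 = pure CLVD
    clvd_frac = 2.0 * abs(eps)        # share of the deviatoric part
    dc_frac = 1.0 - clvd_frac
    return iso, dc_frac, clvd_frac
```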

 

 

Final comments

 

There are many factors that can lead to uncertainty in determining the first motions of waves recorded at sensors and the final moment tensor solution. Seismic waves travelling through the rock mass divert around mining voids and go through numerous refractions, reflections and superimpositions. Noise at the sensor site can also influence the first motion analysis and the solution can also be very sensitive to poor P and S picks. Good moment tensor solutions require a sensor array that is well dispersed, covering the focal sphere in all three dimensions.

Be aware that each moment tensor solution is not going to be of equal quality, particularly for small events recorded by few sensors. Your seismic service provider should provide you with some measure of solution accuracy to help assess this. This might be based on an assessment of the sensor configuration or a misfit analysis between the observed waveforms and the theoretical waveforms generated synthetically from the moment tensor. In general, it is better to look at trends and a convergence of evidence across multiple events rather than a single moment tensor solution. Even if you are investigating a single large event, it is probably worth reviewing the mechanisms of aftershocks and previous events in the area.

It is important not to jump blindly to the nodal plane solutions and to consider the decomposition of the moment tensor in your analysis. If the source is only 5-10% DC, the nodal planes are not very significant. The P, B and T axes are also less important for strongly isotropic sources, so keep that in mind for stereonet analysis.

And one last warning about CLVD components. In tests where random noise is added to an initially noise-free moment tensor inversion of a pure DC source, the noise serves to increase the CLVD component. So when a CLVD component shows up in a solution, it is hard to be sure it isn’t just noise related. In fact, seismologists often evaluate the accuracy of a moment tensor solution by how large the CLVD component is; a good solution has a low CLVD component. This is earthquake seismology though, where the range of rock mass mechanisms is less diverse than in the mining environment, and DC is often an assumption for earthquakes.

Anyway, hopefully that clears up at least some of the mystery around moment tensors. Feel free to contact support with any questions. For those looking to read up further I recommend this manual by Dahm and Krüger (2014) and the references therein. They go into much more detail on alternate decompositions and the moment tensor inversion process.

 
