General Analysis

HW–FW filter

For a few months now, a new tool has been available in the General Analysis app: the hanging wall (HW) and footwall (FW) filter. The HW–FW filter allows you to filter your events based on where they sit in relation to the ticked survey(s). If more than one survey or plane is used for the HW–FW filter, they need to be roughly parallel for the classification to make sense. Below is an example of the classification when one survey (FIG 1) and two surveys (FIG 2) are used. The events are categorised into four categories (HW, FW, ore and unclassified), which can be visualised one at a time or simultaneously. If mXrap does not apply the terms hanging wall and footwall correctly, there is an option to 'flip' them.

By default, the events are classified using a plane orientation that is determined automatically by averaging the orientations of the triangles in the input survey(s). You can also specify the overall plane orientation (dip and dip direction) used in the calculations, which may be useful if the automatically determined orientation does not match your expectations. Changing the overall plane orientation will affect the event classification (FIG 3).

Events are unclassified if they are outside the boundaries of the survey, or if there is a 'hole' in the survey. These unclassified events can still be assigned to HW, FW or ore by ticking 'using nearest vertex instead'. Examples of how the events from the earlier example are classified using the 'classify outer events by nearest vertex' option are shown in FIG 4.

The volumes created by the HW–FW filter can be saved. A volume will be created for each classification (HW, FW, ore, unclassified) with the defined name, which could be the name of the survey(s) used for the classification. These volumes will automatically appear in the VSA table (FIG 5). If your surveys have a dense mesh, consider using the 'simplify mesh' option, as it will speed up the calculation of the exported filter volumes.

For now, these filters apply to events, but the same classification tool can be used with multiple surveys simultaneously and applied to other data types, such as structures, rock mass classification or intact rock tests. Stay tuned for further tips and training!
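To make the plane-averaging idea concrete, here is a minimal sketch of a HW–FW style classification. It is not mXrap's implementation: the function names are illustrative, and it omits the ore zone between two surveys, the survey-boundary check and the nearest-vertex fallback.

```python
import numpy as np

def average_plane_normal(vertices, triangles):
    """Area-weighted average normal of a triangulated survey.
    vertices: (n, 3) array; triangles: (m, 3) array of vertex indices."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)  # length is ~2x triangle area,
    mean = normals.sum(axis=0)            # so the sum is area-weighted
    return mean / np.linalg.norm(mean)

def classify_events(events, vertices, triangles, flip=False):
    """Label each event HW or FW by its signed distance to the average
    survey plane passing through the survey centroid."""
    normal = average_plane_normal(vertices, triangles)
    centroid = vertices.mean(axis=0)
    side = (events - centroid) @ normal
    if flip:  # swap labels if HW/FW come out the wrong way around
        side = -side
    return np.where(side > 0, "HW", "FW")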


Energy–moment relationship

Energy and moment are two independent measures of the strength of a seismic event. Their physical meaning and how they are calculated were described in a previous blog post. Analysis of the relationship between the energy and moment of events can provide insight into seismic sources. For example, blasts or ore pass noise, falsely processed as real events, tend to plot in distinct zones on an energy-moment chart. In general, events with higher-than-average energy are associated with high relative stress. Energy index is a parameter used to estimate effective stress. To calculate energy index, the mean energy-moment relationship must be defined: energy index is the log difference in energy from the mean energy-moment relationship. When comparing energy index between different software or separate sites, it is important to note that if the energy-moment relationship is not the same, the energy index will not be consistent.

The most common method of fitting a linear relationship between two variables is known as least squares regression (LSR). This method essentially minimises the vertical (Y-axis) difference between the data points and the line of best fit. For the energy-moment case, this would be minimising the energy difference. LSR is designed for cases where the independent variable (X-axis) is known perfectly (zero error) and the error is only associated with the dependent variable (Y-axis). This is not suitable for the energy-moment case, as there is uncertainty in both the energy and moment parameters. The uncertainty in moment and the uncertainty in energy are also generally not the same scale.

There are several linear regression methods that account for uncertainty in both parameters. Orthogonal regression minimises the perpendicular difference between the data and the best fit line, assuming a constant ratio between the X and Y variances. There is also a method known as weighted least squares, which does not assume a correlation between the uncertainties of the two variables.

A less complicated approach is to use the quantile-quantile (QQ) plot of the data. This plots the smallest energy against the smallest moment, the second smallest energy against the second smallest moment, and so on. This approach has the effect of normalising the different scale variances of each parameter. The ordinary LSR method can then be applied to the QQ data to obtain an accurate line of best fit. This is equivalent to the orthogonal regression method.

The figure below shows the difference between the QQ fit and the least squares fit of the energy-moment data at the Tasmania mine. The QQ fit is a better match to the zones of highest point density. The poor LSR fit arises because the variance in energy is higher than the variance in moment. The distributions of the energy and moment departure indices are plotted below. Both distributions are slightly asymmetrical, likely due to various superimposed seismic mechanisms. The wider variance in the energy index introduces a bias that produces a shallower least squares fit, as it tries to minimise the vertical departure in the centre of the chart.

If you are doing your own energy index calculations, or using different software, you should be aware of the method used to define the energy-moment relationship and the linear constants used. The least squares approach or chi-square regression should be avoided. The QQ based method is the approach used in mXrap, and you can find the fitted linear equation in the footer of the energy-moment chart.
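As an illustration, the QQ fit and the resulting energy index can be sketched in a few lines. This is a sketch under assumptions, not mXrap's internal code; it assumes you already have arrays of log10 energy and log10 moment:

```python
import numpy as np

def qq_fit(log_energy, log_moment):
    """Fit log10(E) = c + d * log10(M0) by ordinary least squares on the
    sorted (quantile-quantile) values of each parameter."""
    x = np.sort(log_moment)
    y = np.sort(log_energy)
    d, c = np.polyfit(x, y, 1)  # polyfit returns slope first
    return c, d

def log_energy_index(log_energy, log_moment, c, d):
    """Energy index as the log departure of an event's energy from the
    mean energy-moment relationship: log10(EI) = log10(E) - (c + d*log10(M0))."""
    return log_energy - (c + d * log_moment)
```

Fitting on sorted values is what normalises the different variances of the two parameters; the pairing of individual events is deliberately discarded.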
You might see different parameters for “global” energy index and “local” energy index. This distinction comes from the different energy-moment relationships used. The global EI relationship is based on all events that pass the quality filter. The local EI relationship is based on events that pass the current base filter (volumetric, parameter ranges etc.).


New hazard charts in General Analysis

Two new charts have been added to the General Analysis application related to assessing hazard with the frequency-magnitude relationship. The new charts plot various hazard parameters over time, or by time of day:

Charts / Time Series / Hazard over Time
Charts / Diurnal / Diurnal Hazard

The following parameters can be plotted in each chart (maximum two at a time):

Mmin – The magnitude of completeness; the magnitude above which the dataset is complete.
b-value – The slope of the Gutenberg-Richter distribution; describes how the frequency of events scales with magnitude.
N at Mref – The number of events (N) above the reference magnitude (Mref, user defined). Note that for reference magnitudes less than Mmin, N will not reflect the actual number of events in the database, since it is derived from the Gutenberg-Richter distribution assuming a complete dataset.
Hazard Probability – The probability of an event exceeding the design magnitude (user defined) within one year.
Hazard Magnitude – The magnitude that, to a certain reliability (user defined), won't be exceeded within one year. Hazard magnitude is essentially the inverse of hazard probability (see the sketch below).

Each chart is generated by breaking the data into bins and fitting the Gutenberg-Richter distribution. The bin width can be set in the control panel. Since there can be a lot of variability in the data and fitting procedures, there are also controls to smooth the results with a user-defined bandwidth. The figure below is an example of the Diurnal Hazard chart, showing how the b-value varies with the time of day. The b-value drops from around 1.3 to 0.7 during shift change. This represents a large difference in hazard, which is highly sensitive to the b-value (illustrated in a previous post). Note that the hazard calculations assume a constant b-value within the analysis volume. This can result in an underestimated hazard (explained in the Hazard Iso's blog post). For more accurate results, use the hazard assessment application, where the volume is discretised and the probabilities are integrated together from the small scale to the larger scale. If you would like to arrange a root upgrade to get these charts, let us know at our support email address.
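A minimal sketch of how the hazard probability and hazard magnitude described above could be computed from a fitted Gutenberg-Richter model. The a and b values here are assumed inputs (with a normalised to events per year), and this is not mXrap's internal code:

```python
import numpy as np

def hazard_probability(a, b, m_design):
    """P(at least one event exceeding m_design within one year), assuming
    a Poisson process with annual rate N(M) from log10 N = a - b*M."""
    rate = 10 ** (a - b * m_design)
    return 1 - np.exp(-rate)

def hazard_magnitude(a, b, reliability):
    """Magnitude that, with the given reliability (e.g. 0.85), will not
    be exceeded within one year - the inverse of hazard_probability."""
    rate = -np.log(reliability)  # solve exp(-rate) = reliability
    return (a - np.log10(rate)) / b
```

For example, hazard_magnitude(a, b, 0.85) returns the magnitude whose annual exceedance probability is 15%, so hazard_probability at that magnitude gives 0.15 back.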


Stochastic declustering explained

As mentioned in the last blog post, a stochastic declustering algorithm has been implemented in mXrap to separate events into 'clustered' and 'background' components. When designing seismic exclusions and re-entry procedures, it can be useful to separate seismicity that occurs in short bursts from seismicity that has low variability in space and time. Short-term exclusions cannot be used to manage the risk associated with background seismicity, since the hazard inside a potential exclusion would be the same as outside the exclusion. Efficient exclusion and re-entry procedures target areas where seismicity is most clustered and where the seismic hazard to which people are exposed can be reduced with a short disruption to production.

The filter controls for stochastic declustering in General Analysis are in 'Event Filters / Background Activity', and a new chart has been added to show the cumulative events of the two components in 'Charts / Time Series / Declustered CNE'. An example of the cumulative declustered events chart is shown below for a week's worth of events at the Tasmania mine. In this case, approximately 32% of events have been classified as 'background'.

The declustering is based on the distribution of inter-event times (the time between successive events). The distribution (PDF) of inter-event times has been shown to follow the gamma distribution (Corral 2004). The chart below shows how the events in the example above (black crosses) closely follow the gamma distribution (red line). Hainzl et al. (2006) showed how to estimate the rate of background events from the gamma distribution, based on its mean (µ) and standard deviation (σ):

Background proportion = µ² / σ²

Background seismicity is generally assumed to be stationary and Poissonian. In other words, the average time between events is constant and known, but the exact timing between events is random. Each event is assumed to be independent and not to affect the occurrence of other events. The inter-event time of a Poisson process follows the exponential distribution (green line). The event distribution clearly deviates from the background distribution for small inter-event times. This deviation is caused by the clustered component of seismicity. The distribution of small inter-event times corresponds to the inverse distribution (yellow line), which is explained by sequences that follow the Modified Omori Law (MOL). In this case, the slope of the distribution corresponds to the MOL with decay parameter p ≈ 0.8.

The declustering method was described by van Stiphout et al. (2012). The probability that an event is part of the background (purple line) is calculated from the inter-event time and the ratio between the background and gamma PDFs. Events with small inter-event times are more likely to be clustered events; events with large inter-event times are more likely to be background events.

It is important to note the random component in the declustering process. Each specific event may be classed as either 'clustered' or 'background' each time you run the declustering, although the overall proportions will remain the same (hence the 'stochastic' in stochastic declustering). There is also no consideration given to the spatial clustering of events; all events are assessed together in the time domain. Nor is any consideration given to the magnitude of events.
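A minimal sketch of this procedure, following the description above. The mixture-style probability (background proportion times the exponential PDF, divided by the gamma PDF) is my reading of the method; mXrap's exact fitting steps may differ:

```python
import numpy as np
from scipy import stats

def stochastic_decluster(event_times, rng=None):
    """Classify events as background (True) or clustered (False) from
    inter-event times. Returns one flag per inter-event time, i.e. one
    fewer than the number of events; the first event is left out."""
    rng = np.random.default_rng() if rng is None else rng
    dt = np.diff(np.sort(event_times))
    dt = dt[dt > 0]

    # Hainzl et al. (2006): background proportion from mean and variance
    bg_prop = dt.mean() ** 2 / dt.var()

    # Overall inter-event time distribution: gamma (Corral 2004)
    shape, _, scale = stats.gamma.fit(dt, floc=0)
    f_gamma = stats.gamma.pdf(dt, shape, scale=scale)

    # Background component: Poisson process -> exponential inter-event times
    bg_rate = bg_prop / dt.mean()
    f_exp = stats.expon.pdf(dt, scale=1 / bg_rate)

    # van Stiphout et al. (2012): P(background) from the PDF ratio,
    # then an independent random draw per event (hence 'stochastic')
    p_bg = np.clip(bg_prop * f_exp / f_gamma, 0, 1)
    return rng.random(len(dt)) < p_bg
```

Running it twice gives different per-event labels but roughly the same background proportion, which is exactly the behaviour described above.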
The rate of background events is assumed to be constant, although in reality the background rate will slowly vary over time due to changes in system sensitivity, general rates of extraction and different mining locations. To account for long-term fluctuations in the background rate, events are broken into groups and the background proportion is computed separately for each group. Groups of events are kept as small as possible, subject to a minimum number of events and a minimum time period (both user defined). The background rate is constant within each group; a sketch of this grouping is shown below.

Aside from General Analysis, the stochastic declustering process has been added to the Hazard Assessment, Short-term Response Analysis and Seismic Monitoring apps. The background filters in the hazard app can be used to compare the seismic hazard of clustered and background seismicity (as per below). Background rates are also calculated for triggers in the short-term responses app and for the reference rate in the activity rate monitoring window. For those wishing to read more about the declustering process, the CORSSA article by van Stiphout et al. (2012) is a good summary of many different approaches used in earthquake seismology, including the method described here.
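The grouping rule could look something like this sketch, where min_events and min_days stand in for the user-defined limits (illustrative names, not mXrap's):

```python
def split_into_groups(event_times, min_events=200, min_days=7.0):
    """Break a sorted series of event times (in days) into consecutive
    groups, each with at least min_events events spanning at least
    min_days, so a background proportion can be fitted per group."""
    groups, start = [], 0
    for i in range(1, len(event_times) + 1):
        enough_events = i - start >= min_events
        enough_time = event_times[i - 1] - event_times[start] >= min_days
        if enough_events and enough_time:
            groups.append((start, i))
            start = i
    if start < len(event_times):  # fold any remainder into the last group
        if groups:
            groups[-1] = (groups[-1][0], len(event_times))
        else:
            groups = [(0, len(event_times))]
    return groups
```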


New background filters

We have added some new event filter options to General Analysis related to 'background' activity. 'Background' events are generally defined as having low variability in space and time. The new background filters aim to identify events that are clustered in space and time, and the user can display either the 'clustered' or the 'background' component of seismicity. There are three ways of classifying clustered events: by time of day, by proximity to blasts and significant events, and by a stochastic declustering procedure. Stochastic declustering will be explained in a separate blog post.

With the time-of-day filter, you can specify up to five periods of the day to capture increased activity around shift change/blasting times. Times are entered in decimal hours, e.g. 5:30pm = 17.5. Events within these periods will not be shown by default, but you can toggle/invert the time-of-day filter to show only events inside the time-of-day periods (and hide events outside).

With the short-term responses filter, you can define a time period and spherical radius around blasts and significant events to filter out events. Use the normal blast filter to control which blasts are considered. Significant events are considered if they are within the current base filter and above the specified magnitude. Note that the significant event itself is not filtered out (it is treated as a background event, not a clustered event). Just like the time-of-day filter, you can toggle/invert the filter to show only the responses and hide events outside the response windows. A sketch of these two rule-based filters is given at the end of this post.

The last filter option is an automatic classification system for separating background and clustered events. You can toggle between each component of seismicity defined from stochastic declustering. Watch out for the next blog post if you are interested in the details of how this method works.

This is a new addition to the General Analysis application. Find the panel under 'Event Filters / Background Activity'. If you don't see this option, you need a root upgrade. Get in touch with support to arrange an upgrade by emailing our support email address.
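The two rule-based filters amount to simple masks over the catalogue. A rough sketch under assumed inputs (decimal-hour event times, XYZ coordinates, and trigger lists; not mXrap's code):

```python
import numpy as np

def in_time_of_day(event_hours, periods):
    """True for events inside any of up to five daily periods.
    event_hours: time of day in decimal hours (5:30pm = 17.5).
    periods: list of (start, end) tuples, also in decimal hours."""
    mask = np.zeros(len(event_hours), dtype=bool)
    for start, end in periods:
        if start <= end:
            mask |= (event_hours >= start) & (event_hours <= end)
        else:  # period wraps midnight, e.g. (22.0, 6.0)
            mask |= (event_hours >= start) | (event_hours <= end)
    return mask

def in_response_window(event_times, event_xyz, trigger_times, trigger_xyz,
                       window, radius):
    """True for events within `window` after a trigger (blast or
    significant event) and within `radius` of its location."""
    mask = np.zeros(len(event_times), dtype=bool)
    for t0, x0 in zip(trigger_times, trigger_xyz):
        dt = event_times - t0
        dist = np.linalg.norm(event_xyz - x0, axis=1)
        mask |= (dt >= 0) & (dt <= window) & (dist <= radius)
    return mask
```

Inverting either mask gives the toggle behaviour described above: show only the clustered events, or only everything else.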


Moment tensors – a practical guide

Moment tensor analysis is a topic that carries a decent level of uncertainty and confusion for many people, so I'm going to lay it out as simply as I can. For this post, I'm not going to go into too much detail on how moment tensors are actually calculated, but I will summarise the things I think are most important for geotechnical engineers to know when interpreting moment tensor results.

OK, so, what is a moment tensor?

A moment tensor is a representation of the source of a seismic event. The stress tensor and the moment tensor are very similar ideas. Much as a stress tensor describes the state of stress at a particular point, a moment tensor describes the deformation at the source location that generates seismic waves. You can see the similarity between the stress and moment tensors in the figure below. The moment tensor describes the deformation at the source based on generalised force couples, arranged in a 3 × 3 matrix. The matrix is symmetric, so there are only six independent elements (i.e. M12 = M21). The diagonal elements (e.g. M11) are called linear vector dipoles; these are equivalent to the normal stresses in a stress tensor. The off-diagonal elements are moments defined by force couples (moments and force couples were discussed in a previous blog post).

Producing a moment tensor of a seismic event requires the Green's function. This function computes the ground displacement recorded by a seismic sensor for a known moment tensor (the forward problem). A moment tensor inversion is when the inverse Green's function is used to find the source moment tensor from sensor data.

Sure… but what's with the beach balls?

It's pretty hard to interpret a 3 × 3 matrix of numbers, so moment tensors are usually displayed as beach balls, either 2D or 3D. I will mostly discuss the 3D case; the 2D diagram is just a stereonet projection of the 3D beach ball. The construction of a beach ball diagram is very simple. For each point on the surface of a sphere, the moment tensor describes the magnitude and direction of the first motion. If the direction of motion is inwards, towards the source, the surface is coloured white (red arrows). If the direction of motion is outwards, away from the source, the surface is coloured black (blue arrows). Where there is a border between black and white on the beach ball surface, the direction of motion is tangential (purple arrows), and the motion across the border is white-to-black. The figure below shows the first ground motion on the beach ball surface, split into radial and tangential components. The lengths of the radial and tangential arrows are proportional to the strength of the P and S waves respectively. P-waves generally emanate strongest from the middle of the white and black regions. S-waves emanate strongest from the black-white borders.

The location of the pressure and tension axes can be confusing. If you look at the S-waves diagram, the tension axis is in the compressional quadrant. It does make more sense from the P-waves diagram, however. The black/white convention can also be counter-intuitive for some. 'Black' holes pull things inwards and the sun radiates 'white' light outwards, but the beach ball diagram is the opposite of that. I'm sorry, I don't know why this is the convention. Perhaps seismologists are Star Wars fans… Vader wants Luke to come to the dark side, and so this is the movement direction that he is tempted towards… that's all I got.
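The colouring rule can be sketched directly from the standard P-wave radiation pattern: the radial first motion along a unit direction n is proportional to nᵀMn. This is a rough illustration of the rule, not a plotting routine:

```python
import numpy as np

def first_motion_polarity(M, directions):
    """P-wave first-motion polarity on the focal sphere.
    Positive n^T M n = outward motion = 'black'; negative = 'white'.
    M: symmetric 3x3 moment tensor; directions: (k, 3) unit vectors."""
    radial = np.einsum("ki,ij,kj->k", directions, M, directions)
    return np.where(radial > 0, "black", "white")

# Example: a pure double couple (shear), sampled off the nodal planes
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
s = 1 / np.sqrt(2)
n = np.array([[s, s, 0.0], [s, -s, 0.0]])
print(first_motion_polarity(M, n))  # ['black' 'white']
```

Right, but what can I learn about the event mechanism?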
Even with the beach ball diagram, it can still be hard to interpret the geological or physical mechanism of the event. This is why the moment tensor is often decomposed into its constituent elementary source mechanisms. To decompose the moment tensor, the matrix is rotated to zero the off-diagonal elements. This is just like finding the principal axes of a stress tensor, by zeroing the shear elements and leaving the normal stresses. So, every moment tensor can be expressed as three orthogonal linear vector dipoles, rotated to a particular orientation. These three dipoles are referred to as the P (pressure), B (or N, neutral or null) and T (tension) principal axes.

Isotropic source

In combination, the three dipoles result in either an overall expansion or a contraction of the source volume. If the source is explosive, the largest dipole direction is the T axis and the smallest dipole is the P axis. These are reversed for an implosive source, although for a pure isotropic source the axis orientations have no meaning. The isotropic component is the portion of the tensor that represents a uniform volume change. Only P-waves radiate from a purely isotropic source. A positive isotropic component is an expansion/explosion; this can be a confined blast or possibly rock bulking. A negative isotropic component is a contraction/implosion. Any pillar burst, buckling or rock ejecting into a void will likely appear as an implosion: given the path of the recorded waves around the void, all first motions will be towards the source.

Deviatoric source

When the isotropic component is removed from the moment tensor, the remainder is the deviatoric component. The deviatoric tensor results in displacement that has zero net volume change, i.e. equal movement in, equal movement out. The underlying geological process of the deviatoric component is a general dislocation of a fault. The general dislocation can be a mix of shear and normal dislocation (although still with no net volume change). To better interpret the relative proportions of shear and normal displacement, the deviatoric component can be decomposed into the DC and CLVD elemental sources.

Double Couple (DC) source

The DC source is a pure shear dislocation. It is referred to as a double couple because there are two force couples and two (alternate) fault plane orientations that equally model the expected displacement. This notion was discussed in a previous post. The shear direction on the
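As a rough illustration of the decomposition described above, here is a sketch using a common convention for the CLVD measure (the epsilon of Jost & Herrmann, 1989); mXrap's exact convention may differ:

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a symmetric 3x3 moment tensor into an isotropic part and
    DC/CLVD fractions of the deviatoric part, plus the P, B, T axes."""
    iso = np.trace(M) / 3.0                 # uniform volume change
    M_dev = M - iso * np.eye(3)
    vals, vecs = np.linalg.eigh(M_dev)      # eigenvalues in ascending order
    # Principal axes: P = smallest eigenvalue, B = middle, T = largest
    P, B, T = vecs[:, 0], vecs[:, 1], vecs[:, 2]
    if np.allclose(vals, 0):                # pure isotropic source
        return iso, 0.0, 0.0, P, B, T
    # epsilon = -(deviatoric eigenvalue smallest in magnitude)
    #           / |eigenvalue largest in magnitude|
    by_abs = vals[np.argsort(np.abs(vals))]
    eps = -by_abs[0] / abs(by_abs[-1])
    dc_fraction = 1 - 2 * abs(eps)          # 1 = pure shear (DC)
    clvd_fraction = 2 * abs(eps)            # 1 = pure CLVD
    return iso, dc_fraction, clvd_fraction, P, B, T
```

For a pure DC the deviatoric eigenvalues are (1, 0, −1), giving eps = 0 and a DC fraction of 1; for a pure CLVD, (2, −1, −1) gives eps = 0.5 and a CLVD fraction of 1.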


Moment tensors in General Analysis app

Moment tensors have been added to the General Analysis application in the recent update. Beach balls and principal axes can be viewed in the General Analysis 3D view. There is also a separate Moment Tensor window with a number of stereonets and mechanism charts. Two new training videos that walk through the new tools have been uploaded to the General Analysis (3D View) page. IMS and ESG sites should have moment tensors loaded with the events table automatically.


To a/b, or not to a/b

The a/b value is sometimes used as a measure of seismic hazard, but there are some common mistakes made in this analysis and its interpretation.

What is a/b?

The Gutenberg-Richter distribution is a statistical model that describes a log-linear relationship between the number of events, N, exceeding magnitude, M:

log10 N = a − bM

At N = 1, M = a/b. The figure below shows an example of a frequency-magnitude chart with the a/b value highlighted.

Does a/b mean anything?

It is important to distinguish between properties of the dataset and properties of the statistical model. The a/b value is a property of the Gutenberg-Richter statistical model, but it is defined at a particular data point (N = 1). The a/b value does have some meaning, but that's really only because the a and b values both mean something (although I'll come back to the a-value later). In terms of seismic hazard, the activity rate and b-value are the two primary inputs required. The focus on the magnitude where N = 1 is somewhat arbitrary. The statistical model describes the relative frequency of all magnitudes. It is just as valid to normalise the frequency axis to a percentage, i.e. express N as a percentage of the number of events at M = Mmin. So in the figure below, at Mmin, the frequency is 100% and events over M = 1 represent 0.1% of all events over Mmin. Note the a/b magnitude represents approximately 0.006% of events. So the magnitude at N = 1 loses its significance. Asking the significance of a/b is like asking the significance of the magnitude of the top 0.1% of events. Why not the top 0.01% or 0.001%?

The normalisation trap (or the non-normalisation trap)

The reason the a/b value doesn't mean much for seismic hazard is that the a-value by itself is meaningless. The number of events, by itself, doesn't tell you anything about hazard because it has no associated time and space units. It should be pretty easy to understand the importance of normalising to standard time and space units. If I tell you there have been 100 events, you don't know anything about what seismic hazard that represents. It could be 100 events in a very small volume in a very small time period; this would be a high hazard. It could be 100 events in a very large volume over a very long time period; this would be a low hazard. So the important thing for seismic hazard estimates is the event rate density, i.e. the number of events per unit time, per unit volume. Only then can you compare apples with apples.

One final point. A constant event rate density and a constant b-value over time represent a constant hazard state. The problem is that the a/b value without normalisation is entirely dependent on how long you have been recording this constant hazard state. The total number of events (i.e. the a-value) continuously grows, and so does the a/b value, even though the hazard state is not changing. This is why, without normalisation, a/b is not a measure of hazard. If you normalise the event count based on the event rate density and a standard time and volume, the a/b value can be a measure of hazard. However, in terms of probabilistic seismic hazard, the probability that the largest event in the database will exceed the a/b value is ≈ 63%, assuming an open-ended Gutenberg-Richter distribution or a very high MUL (MUL >> a/b).
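That 63% figure follows directly from the Poisson assumption. A quick sketch of the arithmetic, with arbitrary example a and b values:

```python
import numpy as np

a, b = 3.2, 1.1                  # example Gutenberg-Richter constants
ab = a / b                       # magnitude where the model predicts N = 1

# By construction, the expected number of events above a/b is exactly 1
expected = 10 ** (a - b * ab)    # = 1.0

# Poisson: P(at least one event above a/b) = 1 - exp(-1) ~ 63%
p_exceed = 1 - np.exp(-expected)
print(f"a/b = {ab:.2f}, P(largest event > a/b) = {p_exceed:.0%}")
```

The result is independent of the actual a and b values, which is another way of seeing that a/b marks an arbitrary point on the model rather than a hazard threshold.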
Conclusions

- The a/b value is a property of the Gutenberg-Richter model, not of the dataset
- There is no special significance to the magnitude where the Gutenberg-Richter model crosses N = 1
- The a/b value is a function of the number of events
- Without space and time information, the a/b value (and the a-value) are not indicative of hazard
- When comparing different times and zones using a/b, you must normalise using the event rate density and a standard time and volume
- The probability of the largest event exceeding a/b is ≈ 63%


Frequency-magnitude chart anatomy

When you are using the frequency-magnitude chart, it can be easy to forget it is on a log scale, and this can distort a few things. Consider the chart below; have you ever thought the Gutenberg-Richter distribution doesn't look right? That it isn't matching the large events very well? The Gutenberg-Richter distribution is a statistical model of the data. Consider what the chart looks like on a linear scale rather than a log scale. The difference at the tail of the distribution (largest events) seems much less significant, right? The other interesting point is the relative proportion of events above and below Mmin: roughly only 20% of the events in your database are above the magnitude of completeness. Obviously, on a linear scale, you can't see what's happening at the tail very well. That's why we use the log scale in the first place 🙂
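You can reproduce the comparison yourself on synthetic data. This sketch draws Gutenberg-Richter distributed magnitudes (an exponential in magnitude) and plots the same cumulative curve with log and linear frequency axes:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic catalogue: magnitudes following Gutenberg-Richter with b = 1.2
rng = np.random.default_rng(0)
b = 1.2
mags = -1.0 + rng.exponential(scale=1 / (b * np.log(10)), size=20000)

# Cumulative frequency-magnitude: N(M) = number of events >= M
m = np.sort(mags)
n = np.arange(len(m), 0, -1)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, scale in zip(axes, ("log", "linear")):
    ax.plot(m, n, ".", ms=2)
    ax.set_yscale(scale)
    ax.set_xlabel("Magnitude")
    ax.set_ylabel("N (events >= M)")
    ax.set_title(f"{scale} frequency axis")
plt.tight_layout()
plt.show()
```

On the linear panel the tail visually collapses to the axis, which is the distortion (and the usefulness of the log scale) described above.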


Event tags and comments

There are many reasons you might want to store a short snippet of text associated with an event. There are two ways to do this in mXrap: event tags and event comments. Event tags can be used to group events into categories. Example tags might be 'suspected blast', 'damage occurred', 'suspect location', 'outlier' or 'likely crusher noise'. These tags can be used in event filters to quickly show or hide particular categories. Event comments are a second option for assigning user text to events. Each event comment can be unique and about anything; comments have no effect on event filters. You can watch videos on 'Event tags' and 'Event comments'. Both event tags and comments are shown in the main events table in General Analysis. The event tags system has been modified recently, so if your mXrap looks different to the video, you might need a root update. This process is now quick and easy with mXsync; we just need 5-10 minutes to connect via TeamViewer/Webex/GoTo Meeting. Contact our support email address for assistance.
