
Grid Based Analysis parameters

The Grid Based Analysis application can be used to evaluate the spatial distribution of various seismic parameters. There is a range of source parameter options available, and they can give an indication of rock mass behaviour. Some parameters can be considered a proxy (stand-in) for rock mass stress, while others can be a proxy for the amount of deformation. There are also parameters associated with the rock mass mechanism or event type. In grid-based analysis, a representative value of each seismic parameter is assigned to each grid point based on nearby events. This post summarises the calculation methods and control parameters used to assign each seismic parameter to the grid.

Average versus cumulative parameters

Most of the source parameters in the grid-based analysis app are what we call average or cumulative parameters. This is a distinction both in the nature of the parameter and in the underlying calculation method. For some parameters, particularly those associated with rock mass deformation, it makes sense to find the cumulative effect of each event. Take moment, for example: the salient information is how much deformation there has been in a certain area. For other parameters, generally related to stress (like energy index), the cumulative value doesn’t have much meaning, and we are interested in the average value instead. For parameters that scale exponentially, the average is calculated for log10(parameter).

The search radius

Grid-based analysis is fundamentally about associating nearby events with grid points. The event-grid association is made within a certain search radius, which is defined differently for average and cumulative parameters. For average parameters, a different search radius is used for each grid point, based on the nearby event density. There are three parameters that control the search radius around each grid point:

Rmin – Minimum search radius. All events within this distance of the grid point are assigned, no matter how many there are. Default: 2 x Grid Spacing.
Search N – The search radius is expanded until at least this many events are within range. Default: 50 events.
Rmax – Maximum search radius. The search radius does not expand beyond this range, even if Search N has not been reached. Default: 8 x Grid Spacing.

A density-based quality check is applied to all grid cells. At least 10 events must be found within a certain threshold distance of the grid point. The default distance is 90 (in native units). If there are fewer than 10 events within this range, no value is given to that grid point.

For cumulative parameters, rather than defining a search radius for each grid point, the search radius is different for each event, depending on the source size (source radius). No matter what the source radius is for an event, the search radius will not be set below 1.5 times the grid spacing, or above 200 m. The radius is converted to feet if that is the native spatial unit.

The kernel function

The search radius defines which events are associated with the grid, but parameters are also weighted depending on how close the event is to the grid point. The kernel function defines this distance weighting. The kernel function order defines the shape of the kernel function. Various kernel functions are plotted on the right for kernel orders between 0.3 and 50. The default kernel function order is 3.
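To make the average-parameter calculation concrete, here is a minimal sketch in Python. The kernel shape, function name and defaults are our own illustration (the post does not give mXrap’s exact kernel formula), so treat it as a sketch under those assumptions.

```python
import numpy as np

def grid_average(grid_point, events, values, grid_spacing,
                 search_n=50, kernel_order=3.0, qc_n=10, qc_radius=90.0):
    """Assign an average parameter to one grid point (hypothetical sketch).

    The search radius expands from Rmin (2 x grid spacing) until search_n
    events are in range, capped at Rmax (8 x grid spacing). A density-based
    quality check requires qc_n events within qc_radius of the grid point.
    """
    d = np.linalg.norm(events - grid_point, axis=1)   # event-grid distances

    # Density-based quality check: enough events near the grid point?
    if np.sum(d <= qc_radius) < qc_n:
        return np.nan                                  # no value assigned

    # Expand the search radius until search_n events are in range,
    # but never below Rmin or above Rmax.
    r_min, r_max = 2.0 * grid_spacing, 8.0 * grid_spacing
    r = np.clip(np.sort(d)[min(search_n, len(d)) - 1], r_min, r_max)
    in_range = d <= r

    # Distance weighting via the kernel function (assumed form: a smooth
    # roll-off whose steepness is set by the kernel order).
    w = 1.0 / (1.0 + (d[in_range] / r) ** kernel_order)

    # Inverse-distance weighted mean. For parameters that scale
    # exponentially, pass in log10(parameter) instead.
    return np.average(values[in_range], weights=w)
```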
For average parameters, the inverse-distance weighted mean is calculated using the specified kernel function. For cumulative parameters, the kernel function controls how each event is distributed onto the grid cells. The contribution to each grid point is adjusted to ensure the cumulative parameters are conserved. In other words, if you compute the grid using 100 events or 1 kJ of energy, the final result on the grid should also add up to 100 events or 1 kJ.

Shift Change / Blast response ratio

The response ratio for blasts and shift change is based on the event time of day. Blast and shift change periods are defined in General Setup Windows / Grid-based Analysis Settings. The response ratio is the activity rate inside the specified time-of-day periods divided by the activity rate outside those periods. A high blast response ratio, for example, means there are a lot of events associated with blasting and much less activity throughout the rest of the day. A blast response ratio below one indicates the activity rate is higher during production periods, which is a sign of ore pass or other production noise dominating the dataset. The search radius for calculating the response ratios is the same as for the average parameters: a minimum number of events within a minimum and maximum radius. The difference is that no distance weighting is applied; all events inside the search distance are treated the same.

b-value

The event-grid association for the b-value calculations is the same as for the blast and shift change response ratios, i.e. not weighted by distance. A decision metric is used to find the Mmin (magnitude of completeness) for each grid point. Once the Mmin is known, the b-value can be calculated from the average magnitude above Mmin. The decision metric to find Mmin has three components:

Log10(k) – k is the number of events above Mmin. This component favours finding as many events above Mmin as possible, and pushes the solution away from the distribution tail, where there are few events and the other components are erratic.
b – The b-value is part of the decision metric to avoid overshooting the Mmin. Higher b-values are given more weight since underestimating Mmin would result in a lower b-value.
1 – KS – The KS value is a measure of the goodness of fit between the data and the Gutenberg-Richter model.

An example of the decision metric calculation is illustrated in the figure below, along with each of the three components. The metric is calculated for each event, as if it were the Mmin. To speed things up in the grid calculations, …
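A sketch of the Mmin search could look like the following. The three components are as described above; how they are combined into a single metric is not stated in the post, so a simple product is assumed here, and the function is illustrative rather than mXrap’s implementation.

```python
import numpy as np

def find_mmin(magnitudes):
    """Pick Mmin by maximising a decision metric (hypothetical sketch)."""
    mags = np.sort(np.asarray(magnitudes))
    best_mmin, best_metric = None, -np.inf
    for mmin in np.unique(mags)[:-1]:          # each event as candidate Mmin
        above = mags[mags >= mmin]
        k = len(above)
        if above.mean() <= mmin:
            continue
        # Maximum-likelihood b-value from the mean magnitude above Mmin
        b = np.log10(np.e) / (above.mean() - mmin)
        # KS distance between the data and the fitted Gutenberg-Richter model
        model_cdf = 1.0 - 10.0 ** (-b * (above - mmin))
        empirical_cdf = np.arange(1, k + 1) / k
        ks = np.max(np.abs(empirical_cdf - model_cdf))
        # Combine the three components (product form is an assumption)
        metric = np.log10(k) * b * (1.0 - ks)
        if metric > best_metric:
            best_mmin, best_metric = mmin, metric
    return best_mmin
```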


Updates to Hazard Assessment app

A few new features have been added to the Hazard Assessment application, aiming to improve usability, understanding and investigation. The first addition is a chart in the hazard setup window to indicate the current date range settings. Usually the date range for calculating b-value will be a lot longer than for calculating event rate. Hopefully the chart will be a handy visual aid to help you keep your bearings when setting the hazard analysis periods.

There are also two new windows under the “Hazard Assessment/Advanced Tools” tab. The basic hazard assessment tools are designed to be as simple as possible, requiring minimal user interaction. This makes it easy for new users to get results, the trade-off being that there are limited ways to adjust the analysis if required. The new advanced tools allow users greater control over some of the analysis parameters. Be aware though, the default control parameters have been carefully chosen and there is interaction between different parameters, i.e. changing one parameter may mean others should be tuned as well. This should only be done by experienced users familiar with the analysis process.

The first new window is for investigating the frequency-magnitude (FM) relationships used in the hazard calculations. You can pick any grid cell to view the frequency-magnitude chart of events found in the local area, showing the b-value used for that grid point. The b-value is a very sensitive parameter in hazard calculations, and there are cases where the automatic FM modelling algorithm may not work well. There are several markers to help identify areas where the FM model may not represent the data accurately:

Mmin – The magnitude of completeness, above which all events are recorded.
b-value Standard Error – b/√(N above Mmin). Note: this does not describe the deviation of the data from the FM model, but is a measure of uncertainty in the b-value.
Error at N = 1 – A crude goodness-of-fit measure: the difference between the largest event and the ‘a/b’ magnitude. Note: it isn’t necessarily an ‘error’ for there to be a difference here. The ‘a/b’ magnitude is the maximum likelihood for the largest event in the FM model, but it has a wide distribution of possible magnitudes.
KS Fitness – Kolmogorov-Smirnov goodness-of-fit measure, describing how well the data fits the underlying FM model. The KS value itself is like the p-value in a null hypothesis test, i.e. the probability the data does not follow the distribution. A KS value below 0.05 would be a good fit, 0.05–0.1 a mediocre fit, and above 0.1 a poor fit.

A few of the control parameters related to grid-based b-value calculations have been exposed in this window. Users can limit the potential range for the grid Mmin and define the search parameters for assigning events to each grid point:

Rmax – The maximum distance to search for events around the grid point. No events outside this range will be used.
Limit Search N – All events within the Rmin (Rmin = grid spacing) are assigned to the grid point. After that, the search radius increases until there are this many events or Rmax is reached. If more than this many events have been considered, there were more than this many events within Rmin. If fewer than this many events have been considered, Rmax was reached.
Min N above Mmin – At least this many events must be above the Mmin of the grid point FM. If there are fewer events than this, no b-value and no hazard is assigned to this source.
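For concreteness, the sketch below computes the FM quality markers described above for a set of magnitudes, assuming the standard maximum-likelihood b-value estimate; it is our illustration, not the mXrap implementation.

```python
import numpy as np

def fm_quality_markers(magnitudes, mmin):
    """b-value, standard error, error at N = 1 and KS fitness (sketch)."""
    above = np.sort(magnitudes[magnitudes >= mmin])
    n = len(above)

    # Maximum-likelihood b-value and its standard error, b / sqrt(N above Mmin)
    b = np.log10(np.e) / (above.mean() - mmin)
    b_std_err = b / np.sqrt(n)

    # 'a/b' magnitude: where the Gutenberg-Richter model predicts N = 1,
    # i.e. the maximum-likelihood magnitude of the largest event.
    a = np.log10(n) + b * mmin           # from log10(N) = a - b*M at M = Mmin
    error_at_n1 = above.max() - a / b

    # Kolmogorov-Smirnov distance between the data and the GR model
    model_cdf = 1.0 - 10.0 ** (-b * (above - mmin))
    empirical_cdf = np.arange(1, n + 1) / n
    ks = np.max(np.abs(empirical_cdf - model_cdf))

    return b, b_std_err, error_at_n1, ks
```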
There is also an extra quality check to ensure that events are representative of the local source, and not simply the result of a radius expanded into a separate source. A quality cut-off distance is used, where at least 10 events must be within this shorter radius of the grid point.

In the same window there is also a global frequency-magnitude chart. This uses all events before the backdate, throughout the whole mine, and shows the Mmin that is used for the event rate calculations. Events well below the global Mmin (Mmin – Δ) are also excluded from the b-value calculations, primarily to speed things up. The Δ parameter can be adjusted in the control panel but beware, it will affect the other search parameters. E.g. increasing Δ and including more small events means you will need to increase the Search N to ensure you still get the required number of events above Mmin.

The second new window mostly contains tools that already existed in the hazard app, but were a bit hidden away. This window will probably become a bit of an all-in-one window, with the hazard iso’s and minodes shown together. If you pick a minode in the 3D view, the grid point sources that contribute to the hazard at that minode will be plotted, scaled by how much they contribute. Theoretically, every grid point contributes to the hazard at a minode, but an accuracy threshold with a minimum probability is applied to speed up the calculations. Increasing the accuracy will result in more contributing grid point sources.

The PPV-Exceedance Probability chart is also shown in this window. This is useful for understanding the effect of integrating the hazard of multiple minodes together. The probability of exceeding the design PPV increases depending on how many minodes are considered. You can use selection boxes to query any group of minodes you like.

The Strong Ground Motion Relationship window has also been upgraded a little. You can now plot the relationship in a couple of different ways:

Single Reliability (P) – This is the most traditional way of viewing the SGM relationship. The PPV is plotted against the distance to hypocentre (R) for various magnitudes. This chart uses a single reliability / probability of non-exceedance. Adjust the reliability in the control panel. Note: probability of exceedance = 1 – reliability.
Single ML – Plot …


Deswik survey support

The mXrap survey import tool now supports files saved in Deswik’s VDCL and DCF formats. There is also preliminary support for the Deswik Unified File format (DUF), but currently only for DUFv1. We recommend either VDCL or DCF as the most reliable formats for use with current versions of mXrap. If you’re currently exporting surveys to DXF for use with mXrap, either VDCL or DCF should provide a significant saving in disk space and be faster to load into mXrap’s survey cache. As with DXF, the survey import tool will load 3D faces, lines, and text from the Deswik files. In addition, it will also load custom attributes, e.g. user-specified dates, text, or numbers attached to objects within the survey file. The survey import tool updates are available from mXrap version 5.10.9 onwards, but please update to version 5.10.12 if you have not already. If you have a VDCL or DCF file that does not import correctly, or you run into other trouble with the survey import tool, get in touch with us at our support email address.


Mine Geometry Model – Minode Generator

Mine Geometry Model Minode Generator is a new utility app which enables you to generate new minodes from a mine geometry model. If you’re not sure what minodes are or why you would want to generate them, see What are minodes? (note: that post describes the old procedure for getting updated minodes, but with this app you can update minodes yourself without needing to wait for us). If you’re not sure what mine geometry models are, see Mine Geometry Models Application.

The minode generator app operates on the geometry blocks of a single mine geometry model. Because the mine geometry model represents changing geometry over time, a geometry block might start as development at one point in time and then change to another type (e.g. stope) at a later date. The minode generator considers all blocks that are development at any point in time. The app then thins the development blocks to create lines through the centre of the development. At each point in the line it also calculates the effective minode span (the width) and height. Finally, it creates minodes along the lines, aiming to balance coverage of the geometry against the total number of minodes. When you are happy with the generated minodes you can choose to either completely replace your existing minodes, or merge the newly generated minodes with the existing ones (e.g. if your newly generated minodes do not cover all of your development).

The newly generated minodes can be tagged with the Mine Geometry Model steps from their source geometry blocks. Just as the step information is used to show live surveys in General Analysis (i.e. geometry present within the current time filters), it can be used to show live minodes in Hazard Assessment. That is, the assessment will only consider and display those minodes representing geometry which is development within the assessment time period: either the development step’s date is within the time period, or it is before the start of the time period and is still effective (it has not been replaced by, e.g., a stope step in the intervening time). To enable this option in Hazard Assessment, in the Hazard on Excavations window, go to the PPV Controls panel, tick Show advanced controls, and then tick Filter Minodes By Assessment Time. The video below shows an example of changing minodes over time.

The minode generator also supports generating minodes for vertical development. This happens automatically if there is vertical development in the mine geometry model. If you want to exclude vertical development in the Hazard Assessment app, in the Hazard on Excavations window, go to the PPV Controls panel, tick Show advanced controls, and then untick Include Minodes In Vertical Development.

You can find the new app under the name Mine Geometry Model Minode Generator, which should be directly beneath the Mine Geometry Models app. If it isn’t there yet, you’ll need a root upgrade to get the latest applications – in that case please contact us at our support email address.
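The live-minode rule above boils down to a small predicate. The following is a sketch of our reading of that rule (the names and signature are hypothetical, not mXrap source code):

```python
from datetime import datetime

def minode_is_live(step_date, replaced_date, period_start, period_end):
    """True if a minode's development step is 'live' for the assessment period.

    Live means: the step date falls inside the period, or the step predates
    the period and has not been replaced (e.g. by a stope step) before the
    period starts. replaced_date is None if the block was never replaced.
    """
    if period_start <= step_date <= period_end:
        return True
    if step_date < period_start:
        return replaced_date is None or replaced_date >= period_start
    return False

# e.g. developed March 2019, stoped out June 2020, assessing Q1 2020 -> live
minode_is_live(datetime(2019, 3, 1), datetime(2020, 6, 1),
               datetime(2020, 1, 1), datetime(2020, 3, 31))
```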


Plane based analysis

The Grid Based Analysis app now has a plane based function. The user can select a plane (defined in the plane editor window) and display cumulative or average parameters on that plane. As on the grid, the average and cumulative parameters are calculated differently. For the average parameters, the plane points are treated identically to the grid: the events around each point are found and the average parameter is calculated (the plane is essentially a 2D grid). The cumulative parameters are calculated on a ‘splatter’ basis, where events within a certain distance of the plane are projected onto the plane and their impact is distributed among the plane points within their source radius. Plane based analysis can be used to evaluate the change in seismic parameters along a fault plane, or simply to view changes in parameters as a slice. The plane can be dynamically overridden, allowing the user to ‘sweep’ through their mine. Several transparency options are available, including transparency based on a single value, on the value of the parameter of interest, or on the number of events in the area (analysis quality).
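A sketch of the ‘splatter’ distribution might look like the following; the taper shape, the cut-off distance and all names are assumptions for illustration, and conservation of the cumulative value is enforced by normalising the weights.

```python
import numpy as np

def splatter_onto_plane(event_xyz, value, source_radius,
                        plane_origin, plane_normal, plane_points, max_offset):
    """Distribute one event's value onto plane points (hypothetical sketch)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    offset = np.dot(event_xyz - plane_origin, n)     # signed distance to plane
    if abs(offset) > max_offset:
        return np.zeros(len(plane_points))           # event too far from plane

    projected = event_xyz - offset * n               # foot point on the plane
    d = np.linalg.norm(plane_points - projected, axis=1)
    w = np.where(d <= source_radius, source_radius - d, 0.0)  # linear taper
    if w.sum() == 0.0:
        return np.zeros(len(plane_points))
    return value * w / w.sum()                       # conserves the total value
```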


Analysing seismicity around large events

A new window has been added to the Large Event Analysis application to analyse seismicity around large events. In a similar approach to the Short-term Response Analysis app, events are assigned to large events based on a spheroid volume and a time range before and after the large event. The chart below shows the energy index of seismicity for 24 hours before and after 156 large events. The time of the large event is indicated by a vertical, dotted black line. If a large event had a blast or another large event in the previous two hours, it was not included. The time of the most recent blast and large event within the spheroid volume is listed in the table, so cases can be excluded from the analysis if required. There are other chart options available to show individual large events (rather than stacked) or to show different seismic parameters (other than energy index):

Cumulative number of events
Cumulative seismic moment
Cumulative apparent volume
Activity rate
Apparent stress frequency
b-value

You can also copy the current seismic events filter into the General Analysis window to use the full range of charts and tools. If you would like to arrange a root upgrade to get the latest applications, please contact us at our support email address.
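The event association could be sketched as below, with the spheroid volume simplified to a sphere (an assumption; the names and units are illustrative):

```python
import numpy as np

def events_near_large_event(events_xyz, events_t, big_xyz, big_t,
                            radius, hours_before=24.0, hours_after=24.0):
    """Mask of events inside the volume and time window (hypothetical sketch).

    events_t and big_t are assumed to be epoch times in seconds.
    """
    in_space = np.linalg.norm(events_xyz - big_xyz, axis=1) <= radius
    dt_hours = (events_t - big_t) / 3600.0
    in_time = (dt_hours >= -hours_before) & (dt_hours <= hours_after)
    return in_space & in_time
```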


New hazard charts in General Analysis

Two new charts have been added to the General Analysis application related to assessing hazard with the frequency-magnitude relationship. The new charts plot various hazard parameters over time, or by time of day:

Charts / Time Series / Hazard over Time
Charts / Diurnal / Diurnal Hazard

The following parameters can be plotted in each chart (maximum two at a time):

Mmin – The magnitude of completeness: the magnitude above which the dataset is complete.
b-value – The slope of the Gutenberg-Richter distribution; describes how the frequency of events scales with magnitude.
N at Mref – The number of events (N) above the reference magnitude (Mref, user defined). Note: for reference magnitudes below Mmin, N will not reflect the actual number of events in the database, since it is based on the Gutenberg-Richter distribution, assuming a complete dataset.
Hazard Probability – The probability of an event exceeding the design magnitude (user defined) within one year.
Hazard Magnitude – The magnitude that, to a certain reliability (user defined), won’t be exceeded within one year. Hazard magnitude is essentially the inverse of hazard probability.

Each chart is generated by breaking the data into bins and fitting the Gutenberg-Richter distribution. The bin width can be set in the control panel. Since there can be a lot of variability in the data and fitting procedures, there are also controls to smooth the results, with a user-defined bandwidth.

The figure below is an example of the Diurnal Hazard chart, showing how the b-value varies with the time of day. The b-value drops from around 1.3 to 0.7 during shift change. This represents a large difference in hazard, which is highly sensitive to the b-value (illustrated in a previous post). Note that the hazard calculations assume a constant b-value within the analysis volume. This can result in an underestimated hazard (explained in the Hazard Iso’s blog post). For more accurate results, use the Hazard Assessment application, where the volume is discretised and the probabilities are integrated together from the small scale to the larger scale. If you would like to arrange a root upgrade to get these charts, let us know at our support email address.
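The post does not spell out the formulas, but a common Poisson / Gutenberg-Richter form of the last two parameters looks like this (a sketch under those assumptions, not necessarily the exact mXrap formulation):

```python
import numpy as np

def hazard_probability(n_per_year_above_mmin, b, mmin, m_design):
    """Probability of at least one event above m_design within one year."""
    # Gutenberg-Richter scales the yearly rate above Mmin down to the rate
    # above the design magnitude; Poisson gives the exceedance probability.
    rate = n_per_year_above_mmin * 10.0 ** (-b * (m_design - mmin))
    return 1.0 - np.exp(-rate)

def hazard_magnitude(n_per_year_above_mmin, b, mmin, reliability=0.85):
    """Magnitude not exceeded within one year, at the given reliability."""
    rate = -np.log(reliability)        # allowed yearly exceedance rate
    return mmin - np.log10(rate / n_per_year_above_mmin) / b

# e.g. 500 events/year above Mmin = -1, b = 1.0, design magnitude 2.0
hazard_probability(500, 1.0, -1.0, 2.0)   # ~0.39
```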


Background filters in the hazard app

New background filters have been added to the Hazard Assessment application. The time-of-day filter can be used to see the effect on the hazard results of removing events during blasting/shift change. You can view the results in either raw or normalised form. The hazard calculations do normalisation for the event rate calcs anyway, to represent hazard in yearly terms: if your analysis period is six months, the number of events is doubled to represent a year’s worth of events. When applying the time-of-day filter though, the actual analysis period is less than six months, because several hours per day have been removed. Without normalisation, the hazard will always drop when applying the time-of-day filter, because you are removing events and nothing else changes (i.e. still using 6 months). If normalisation is turned on, the time period that has been removed is accounted for in the hazard calculations. The results then accurately represent the state of the hazard during the relevant times of day.

Normalisation also applies to the short-term responses filter, where events can be removed based on a time and distance from a blast or significant event. In this case the normalisation is a bit more complicated. With the time-of-day filter, the effective analysis period is the same for the whole grid. Here however, there will be an uneven distribution of space and time removed from the analysis, so each individual cell has its own effective analysis period, based on how many triggers (and responses) are nearby. The idea is still the same though: without normalisation, the hazard will drop due to the removal of events without adjusting the analysis period. With normalisation turned on, the results will represent the hazard state outside of short-term response regions.

A new chart has been added to the Hazard Assessment app that shows the effect of different short-term response filtering on hazard. The chart works in a similar way to the Track Volumes over Time chart, by computing the hazard over and over again, automatically changing variables with each run. The chart and associated control panel can be found in the Hazard Assessment / Hazard ISO’s window, under the Response Analysis menu. To generate the chart, you need to specify a maximum response time, a time delta, and response distances (up to 6). The hazard will be calculated for each response distance and for each response time from zero to the maximum (at delta intervals). The hazard recorded is the probability of exceeding the design magnitude within the chosen grid, which is the value displayed in the footer of the 3D ISO view. It can take some time to calculate, depending on how many iterations you specify. The video below shows the chart being generated for response times up to 72 hours and response distances of 50, 100 and 150 m.
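As a toy illustration of the time-of-day normalisation (our own example; the numbers and function are made up):

```python
def normalised_yearly_rate(n_events, analysis_days, removed_hours_per_day=0.0):
    """Scale an event count to a yearly rate over the effective period."""
    # Hours removed each day shrink the effective analysis period, so the
    # same event count maps to a higher normalised rate.
    effective_days = analysis_days * (24.0 - removed_hours_per_day) / 24.0
    return n_events * 365.25 / effective_days

# Six months of data, with 4 hours/day removed around blasting/shift change:
normalised_yearly_rate(1200, 182.6)        # raw: ~2400 events/year
normalised_yearly_rate(900, 182.6, 4.0)    # filtered + normalised: ~2160
```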


Stochastic declustering explained

As mentioned in the last blog post, a stochastic declustering algorithm has been implemented in mXrap to separate events into ‘clustered’ and ‘background’ components. When designing seismic exclusions and re-entry procedures, it can be useful to separate seismicity that occurs in short bursts from seismicity that has low variability in space and time. Short-term exclusions cannot be used to manage the risk associated with background seismicity, since the hazard inside a potential exclusion would be the same as outside it. Efficient exclusion and re-entry procedures target areas where seismicity is most clustered and where the seismic hazard to which people are exposed can be reduced with a short disruption to production.

The filter controls for stochastic declustering in General Analysis are in ‘Event Filters / Background Activity’, and a new chart has been added to show the cumulative events of the two components in ‘Charts / Time Series / Declustered CNE’. An example of the cumulative declustered events chart is shown below for a week’s worth of events at the Tasmania mine. In this case approximately 32% of events have been classified as ‘background’.

The declustering is based on the distribution of inter-event times (the time between successive events). The distribution (PDF) of inter-event times has been shown to follow the gamma distribution (Corral 2004). The chart below shows how the events in the example above (black crosses) closely follow the gamma distribution (red line). Hainzl et al. (2006) showed how to estimate the rate of background events from the gamma distribution, based on the mean (µ) and standard deviation (σ):

Background Proportion = µ² / σ²

Background seismicity is generally assumed to be stationary and Poissonian. In other words, the average time between events is constant and known, but the exact timing between events is random. Each event is assumed to be independent and not to affect the occurrence of other events. The inter-event time of a Poisson process follows the exponential distribution (green line). The event distribution clearly deviates from the background distribution for small inter-event times. This deviation is caused by the clustered component of seismicity. The distribution of small inter-event times corresponds to the inverse distribution (yellow line), which is explained by sequences that follow the Modified Omori Law (MOL). In this case the slope of the distribution corresponds to the MOL with decay parameter p ≈ 0.8.

The declustering method was described by van Stiphout et al. (2012). The probability that an event is part of the background (purple line) is calculated from the inter-event time and the ratio between the background and gamma PDFs. Events with small inter-event times are more likely to be clustered events; events with large inter-event times are more likely to be background events. It is important to note the random component in the declustering process. Each specific event may be classed as either ‘clustered’ or ‘background’ each time you run the declustering, although the overall proportions will remain the same (hence the ‘stochastic’ in stochastic declustering). There is no consideration given to the spatial clustering of events; all events are assessed together in the time domain. There is also no consideration given to the magnitude of events.
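Our reading of the procedure, following Hainzl et al. (2006) and van Stiphout et al. (2012), is sketched below; the mXrap implementation may differ in detail, and the variable names are our own.

```python
import numpy as np
from scipy import stats

def stochastic_decluster(event_times, rng=None):
    """Label events as background (True) or clustered (False) - a sketch."""
    rng = rng or np.random.default_rng()
    tau = np.diff(np.sort(event_times))        # inter-event times
    mu, sigma = tau.mean(), tau.std()
    phi = mu**2 / sigma**2                     # background proportion
    bg_rate = phi / mu                         # background activity rate

    # Poisson background: exponential inter-event time PDF.
    f_bg = bg_rate * np.exp(-bg_rate * tau)
    # Overall inter-event times follow a gamma distribution (Corral 2004).
    a, loc, scale = stats.gamma.fit(tau, floc=0.0)
    f_all = stats.gamma.pdf(tau, a, loc=loc, scale=scale)

    # Probability each event is background, then a random draw - hence
    # 'stochastic': labels change run to run, overall proportions do not.
    p_bg = np.clip(phi * f_bg / f_all, 0.0, 1.0)
    is_bg = rng.random(len(tau)) < p_bg
    # The first event has no preceding inter-event time; call it background.
    return np.concatenate([[True], is_bg])
```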
The rate of background events is assumed to be constant, although in reality the background rate will slowly vary over time, related to changes in system sensitivity, general rates of extraction and different mining locations. To account for long-term fluctuations in background rate, events are broken into groups, and the background proportion is computed separately for each group. Groups of events are kept as small as possible, subject to a minimum number of events and a minimum time period (user defined). The background rate is constant within each group.

Aside from General Analysis, the stochastic declustering process has been added to the Hazard Assessment, Short-term Response Analysis, and Seismic Monitoring apps. The background filters in the hazard app can be used to compare the seismic hazard of clustered and background seismicity (as per below). Background rates are also calculated for triggers in the short-term responses app and for the reference rate in the activity rate monitoring window. For those wishing to read more about the declustering process, the CORSSA article by van Stiphout et al. (2012) is a good summary of many different approaches used in earthquake seismology, including the method described here.


New background filters

We have added some new event filter options to General Analysis related to ‘background’ activity. ‘Background’ events are generally defined as having low variability in space and time. The new background filters aim to identify events that are clustered in space and time, and the user can display either the ‘clustered’ or the ‘background’ component of seismicity. There are three ways of classifying clustered events: by time of day, by proximity to blasts and significant events, and by a stochastic declustering procedure. Stochastic declustering will be explained in a separate blog post.

With the time-of-day filter, you can specify up to five periods of the day to define increased activity around shift change/blasting times. Times are entered in hours, e.g. 5:30pm = 17.5. Events within these periods are not shown by default, but you can toggle/invert the time-of-day filter to only show events inside the time-of-day periods (and hide events outside).

With the short-term responses filter, you can define a time period and spherical radius around blasts and significant events to filter out events. Use the normal blast filter to control which blasts are considered. Significant events are considered if they are within the current base filter and above the specified magnitude. Note that the significant event itself is not filtered out (it is treated as a background event, not a clustered event). Just like the time-of-day filter, you can toggle/invert the filter to only show the responses, and hide events outside the response windows.

The last filter option is an automatic classification system for separating background and clustered events. You can toggle between each component of seismicity defined from stochastic declustering. Watch out for the next blog post if you are interested in the details of how this method works. This is a new addition to the General Analysis application. Find the panel under ‘Event Filters / Background Activity’. If you don’t see this option, you need a root upgrade. Get in touch to arrange an upgrade by emailing our support email address.
