Our Analysis Tools

We have continued to update our online analysis tools to provide more convenient, comprehensive, and easily understood analysis of your data.

As development has continued, our terminology has evolved. Please note the table of terms and calculations below; it describes how terms are used within the web analysis tools, which may not match how the same terms are used within the app itself.

Please Note: These updated tools use the entire output of your exported session, unlike some previous versions, which used only a portion of the output.


Session Analysis

Session Analysis offers a range of metrics with which to explore your data, including information about event frequency, rate, and relative frequency.

Additionally, Interval analyses are performed using either the pre-determined Interval for data collected in an Interval mode, or a post-hoc Interval length specified at the time of analysis.

A comprehensive Conditional Probability analysis is performed on data with a pre-determined or post-hoc Interval. Each button is compared to every other button as a potential Antecedent or Consequence in a series of concise tables.


Reliability Analysis

Reliability Analysis accepts two sets of data: one called the Reference, the other called the Comparison. These two data sets are examined for Inter-Observer Agreement (IOA) via multiple methods.

Online Analysis Terms and Calculations

Session Metrics

Observation Duration

Length of time during which recording an event was possible

Interval Length

Length of time in seconds for each interval

Per-Button Metrics

Count

Simple frequency

Count/Minute

(Count) / (Observation Duration in minutes)

Total Duration

Sum of all active time for a Duration button

Duration Proportion

(Total Duration) / (Observation Duration)

Average Duration

(Total Duration) / (Count)

Min Duration

Shortest period of time a Duration button was active

Max Duration

Longest period of time a Duration button was active

Whole Interval

Count of intervals in which a Duration button persisted from before the interval's start until after its end

Whole Interval Proportion

(Whole Interval) / (Total Duration / Interval Length)

Partial Interval

Count of intervals in which a button was active, even for an instant

Partial Interval Proportion

(Partial Interval) / (Total Duration / Interval Length)
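
As a rough illustration of the per-button calculations above, the Python sketch below works through them for a single hypothetical Duration button. The activation times, Observation Duration, and Interval Length are invented example values, not output from the tools.

```python
# Hypothetical input: activation periods for one Duration button, as
# (start_seconds, end_seconds) pairs within the observation window.
activations = [(2.0, 7.5), (12.0, 14.0), (30.0, 48.0)]
observation_duration = 60.0   # Observation Duration, in seconds
interval_length = 10.0        # Interval Length, in seconds

count = len(activations)
count_per_minute = count / (observation_duration / 60.0)

durations = [end - start for start, end in activations]
total_duration = sum(durations)
duration_proportion = total_duration / observation_duration
average_duration = total_duration / count if count else 0.0
min_duration = min(durations) if durations else 0.0
max_duration = max(durations) if durations else 0.0

# Interval metrics: split the observation into fixed-length intervals.
num_intervals = int(observation_duration // interval_length)
whole_intervals = 0
partial_intervals = 0
for i in range(num_intervals):
    lo, hi = i * interval_length, (i + 1) * interval_length
    # Partial Interval: the button was active at any point in the interval.
    if any(start < hi and end > lo for start, end in activations):
        partial_intervals += 1
    # Whole Interval: one activation spans the entire interval.
    if any(start <= lo and end >= hi for start, end in activations):
        whole_intervals += 1

# Proportions as defined above, relative to (Total Duration / Interval Length).
whole_interval_proportion = whole_intervals / (total_duration / interval_length)
partial_interval_proportion = partial_intervals / (total_duration / interval_length)
```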

Button Comparison Metrics

Count Proportion

(Count) / (Sum of Count for all buttons)

Category Comparison Metrics

Color Count Proportion

(Count) / (Sum of Count for all buttons of the same color/category)
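
A minimal sketch of these two proportion calculations, using invented button names, counts, and colors:

```python
# Hypothetical per-button counts, grouped by button color/category.
counts = {"Hits": 12, "Misses": 4, "Breaks": 8}
colors = {"Hits": "green", "Misses": "green", "Breaks": "red"}

total_count = sum(counts.values())

for button, count in counts.items():
    # Count Proportion: this button's Count over the Count of all buttons.
    count_proportion = count / total_count
    # Color Count Proportion: this button's Count over the Count of all
    # buttons sharing its color/category.
    same_color_total = sum(c for b, c in counts.items()
                           if colors[b] == colors[button])
    color_count_proportion = count / same_color_total
    print(button, round(count_proportion, 3), round(color_count_proportion, 3))
```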

Button Comparison Conditional Probability

Antecedent Conditional Probability

(Count of ButtonX’s Partial Intervals for which ButtonY has a partial interval in the immediately prior interval) / (Partial Interval of ButtonX)

Antecedent Proportion

(Count of ButtonX’s Partial Intervals for which ButtonY has a partial interval in the immediately prior interval) / (Partial Interval of ButtonY)

Consequence Conditional Probability

(Count of ButtonX’s Partial Intervals for which ButtonY has a partial interval in the immediately following interval) / (Partial Interval of ButtonX)

Consequence Proportion

(Count of ButtonX’s Partial Intervals for which ButtonY has a partial interval in the immediately following interval) / (Partial Interval of ButtonY)

Co-occurrence

(Count of ButtonX’s Partial Intervals for which ButtonY has a partial interval in the same interval) / (Partial Interval of ButtonY)
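
The sketch below illustrates these Conditional Probability calculations for two hypothetical buttons, assuming their per-interval Partial Interval flags have already been determined; the flag lists are invented example data.

```python
# Hypothetical per-interval Partial Interval flags for two buttons,
# one entry per interval in session order (True = active at some point).
button_x = [True, False, True, True, False, True]
button_y = [False, True, True, False, True, False]

n = len(button_x)
partial_x = sum(button_x)  # Partial Interval of ButtonX
partial_y = sum(button_y)  # Partial Interval of ButtonY

# Intervals where ButtonX is active and ButtonY was active in the
# immediately prior interval (antecedent pairings).
antecedent_pairs = sum(1 for i in range(1, n) if button_x[i] and button_y[i - 1])
# Intervals where ButtonX is active and ButtonY is active in the
# immediately following interval (consequence pairings).
consequence_pairs = sum(1 for i in range(n - 1) if button_x[i] and button_y[i + 1])
# Intervals where both buttons are active in the same interval.
cooccurrence_pairs = sum(1 for i in range(n) if button_x[i] and button_y[i])

antecedent_conditional_probability = antecedent_pairs / partial_x
antecedent_proportion = antecedent_pairs / partial_y
consequence_conditional_probability = consequence_pairs / partial_x
consequence_proportion = consequence_pairs / partial_y
co_occurrence = cooccurrence_pairs / partial_y
```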

Inter-Observer Summary Agreement

Count Event Reliability

(Count of the Reference Button) / (Count of the Comparison Button)

Duration Reliability

(Total Duration of Reference Button) / (Total Duration of Comparison Button)

Count Interval Reliability

(Partial Interval of Reference Button) / (Partial Interval of Comparison Button)
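
Each summary agreement metric is a simple Reference-to-Comparison ratio, as sketched below with invented totals:

```python
# Hypothetical per-button totals taken from the Reference and Comparison files.
reference = {"count": 14, "total_duration": 93.0, "partial_intervals": 11}
comparison = {"count": 16, "total_duration": 101.5, "partial_intervals": 12}

# Each metric is the Reference value divided by the Comparison value.
count_event_reliability = reference["count"] / comparison["count"]
duration_reliability = reference["total_duration"] / comparison["total_duration"]
count_interval_reliability = (
    reference["partial_intervals"] / comparison["partial_intervals"]
)
```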

Inter-Observer Occurrence Matching Agreement

Partial Interval Reliability

(Count of Intervals in which the Reference and Comparison share the same Partial Interval status) / (Total Intervals)

Partial Interval Reliability κ

Cohen’s Kappa formula: (Pr(a) – Pr(e)) / (1 – Pr(e)), using the Partial Interval Reliability conditions for Agreement/Disagreement

Whole Interval Reliability

(Count of Intervals in which the Reference and Comparison share the same Whole Interval status) / (Total Intervals)

Whole Interval Reliability κ

Cohen’s Kappa formula: (Pr(a) – Pr(e)) / (1 – Pr(e)), using the Whole Interval Reliability conditions for Agreement/Disagreement
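
The sketch below works through the occurrence matching calculations with invented interval-by-interval statuses. The same agreement logic applies whether the statuses are Partial Interval or Whole Interval flags, and the Kappa line follows the standard Cohen's Kappa chance-agreement estimate; it is an illustration, not necessarily the tools' exact implementation.

```python
# Hypothetical interval-by-interval Partial Interval status for one button,
# as recorded by the Reference and Comparison observers.
reference  = [True, True, False, True, False, False, True, False]
comparison = [True, False, False, True, False, True, True, False]

total_intervals = len(reference)

# Interval reliability: share of intervals where both observers record
# the same status for the interval.
agreements = sum(1 for r, c in zip(reference, comparison) if r == c)
interval_reliability = agreements / total_intervals

# Cohen's Kappa: (Pr(a) - Pr(e)) / (1 - Pr(e)), where Pr(a) is the observed
# agreement and Pr(e) is the agreement expected by chance, estimated from
# each observer's own rate of scoring an interval as occurring.
pr_a = interval_reliability
p_ref = sum(reference) / total_intervals
p_cmp = sum(comparison) / total_intervals
pr_e = p_ref * p_cmp + (1 - p_ref) * (1 - p_cmp)
kappa = (pr_a - pr_e) / (1 - pr_e) if pr_e != 1 else 1.0
```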