HASH has a robust analysis system, allowing users to gather data from their simulations as metrics and plot them in a range of graph types. By defining an agent in your simulation that performs its own data collection and transformation, you can generate more complex metrics that represent weighted combinations of other metrics and datasets. This can unlock additional functionality in your HASH simulations.
In a simulation like Interconnected Call Centers, a modeler’s main goal might be to reduce the number of balked calls that occur during a simulation run. The number of balked calls is already a defined metric, so we can easily spin up an optimization experiment. Running it, we discover that the optimal number of call centers is infinity! After all, the more call centers we initialize, the more calls we can handle.
However, in setting up this optimization, we’ve ignored an important business consideration: the more call centers we use, the higher the costs! We need to create a new metric that accounts for these costs and optimizes within a cost constraint.
To do this, we'll initialize a new agent whose task is to calculate this more complex metric. We'll follow these steps: decide how the metric should be calculated, create a dedicated metric-collecting agent, and write a behavior that gathers the data and computes the metric.
Before you start modifying your simulation, you should define how your complex metric will be calculated. It could be a ratio of two different values, the mean of a specific field across all agents, etc. For instance, if the complex metric will be a weighted sum of various data produced by your simulation, you need to determine the values of those weights.
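Such a weighted-sum metric over n simulation outputs x_1, ..., x_n takes the form:

metric = w_1 * x_1 + w_2 * x_2 + ... + w_n * x_n

where each weight w_i encodes the relative importance (or cost) of its component.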
This agent must have a search radius that allows it to see all other agents in the simulation, and it should be initialized in the center of the simulation. You can set the state.hidden field to true in order to remove it from the 3D Viewer:
{
  "behaviors": ["calculate_metric.js"],
  "position": [0, 0, 0],
  "search_radius": 100,
  "hidden": true
}
// calculate_metric.js
const behavior = (state, context) => {
  // Gather data
  const num_centers = context.globals().n_call_centers;
  const ns = context
    .neighbors()
    .filter((n) => n.nBalked > 0)
    .map((n) => n.nBalked);
  const balked_calls = hstd.stats.sum(ns);

  // Assign weights to the different components of the metric
  const w_num_centers = 10;
  const w_balked_calls = 0.1;

  // Calculate the complex metric: more centers and more balked calls
  // both increase the cost
  state.metric = w_num_centers * num_centers + w_balked_calls * balked_calls;
};
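For plots (and the optimizer) to see this value, it needs to be registered as a metric in analysis.json. A minimal sketch, assuming the metric agent is the only hidden agent in the simulation and naming the output cost_metric (both assumptions are illustrative):

{
  "outputs": {
    "cost_metric": [
      { "op": "filter", "field": "hidden", "comparison": "eq", "value": true },
      { "op": "get", "field": "metric" },
      { "op": "sum" }
    ]
  }
}

The filter narrows the agent pool to the metric agent, get extracts its metric field, and sum collapses the result to a single plottable number.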
Metric weights can also be captured as global parameters for easy modification.
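For example, the weights might live in globals.json alongside the existing n_call_centers parameter (the weight names and values here are illustrative):

{
  "n_call_centers": 10,
  "w_num_centers": 10,
  "w_balked_calls": 0.1
}

The behavior can then read them with const { w_num_centers, w_balked_calls } = context.globals(); instead of hard-coding them.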
Now you can go ahead and run your new and improved optimization, which takes into account more realistic cost constraints.
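As a sketch, an optimization experiment over that metric in experiments.json might look like the following; the experiment name, run counts, and candidate values are illustrative, so check the Experiments docs for the exact schema:

{
  "Minimize call center cost": {
    "type": "optimization",
    "maxRuns": 20,
    "minSteps": 100,
    "maxSteps": 100,
    "metricName": "cost_metric",
    "metricObjective": "min",
    "fields": [{ "name": "n_call_centers", "values": [1, 2, 4, 8, 16] }]
  }
}

Because the metric now penalizes additional centers, the optimizer will settle on a finite number of call centers rather than "infinity".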
An additional common use case for complex metrics is tuning a model. When we attempt to compare the outputs of our model to external data sources, we can use a complex metric to assess our model’s validity and perform some calibration.
Let's set up a slightly different complex metric:
// gather_data.js
const behavior = (state, context) => {
  // Gather data from agents
  const ns = context.neighbors().map((n) => n.field);
  const agent_avg = hstd.stats.mean(ns);

  // Gather the matching datapoint from the uploaded dataset.
  // Replace "" with your dataset's name; state.timestep is assumed
  // to be initialized to 0 and incremented each step.
  const data_avg = context.data()[""][state.timestep];

  // Calculate the squared error between model and data
  const calc_error = (model, data) => (model - data) ** 2;
  state.error_metric = calc_error(agent_avg, data_avg);

  // cumulative_error_metric should be initialized to 0 in the agent's state
  state.cumulative_error_metric += state.error_metric;
};
Sum of squared errors is the typical method for calculating the error between sample datapoints and the "predicted", or modeled, datapoints: Error = Σ (agent_avg - data_avg)^2
Now that we have the error captured in a metric, we can apply it. For example, you can run an optimization experiment that minimizes cumulative_error_metric, calibrating the model's parameters so its output tracks the dataset; a sketch follows below.
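Such a calibration experiment follows the same pattern as before. In this sketch, arrival_rate is a hypothetical tunable parameter, and cumulative_error_metric is assumed to be registered as an output in analysis.json:

{
  "Calibrate against dataset": {
    "type": "optimization",
    "maxRuns": 20,
    "minSteps": 100,
    "maxSteps": 100,
    "metricName": "cumulative_error_metric",
    "metricObjective": "min",
    "fields": [{ "name": "arrival_rate", "values": [0.5, 1, 1.5, 2] }]
  }
}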
By plotting your model output alongside the real-world values from the dataset, you should see the optimization experiment producing a well-calibrated model.