How to get a specific day and hour for the entire network (snapshot in the past)
This use case focuses on retrieving the measured speed and flow data for the entire network at a specific date and time in the past. It gives you a snapshot of what happened across the network at that moment, which helps you analyze events and detect anomalies.
First step
The endpoint is:
GET /time-slice/get
Path parameter
Not applicable.
Query parameters
For additional details, see API Reference.
| Parameter | Required | Note | Example |
|---|---|---|---|
| fromTime | YES | Start of the time interval. | 2025-01-02T08:00:00Z |
| toTime | NO | End of the time interval. It must not exceed 1 hour after fromTime. If not specified, only data associated with fromTime is returned. | 2025-01-02T08:10:00Z |
| fromRow | NO | Index of the first element to return (starts from 1). | 1 |
| toRow | NO | Index (excluded) of the last element to return. | 10 |
fromRow and toRow can be set to paginate large datasets.
Example of request
GET https://api.ptvgroup.tech/hda/v1/time-slice/get?fromTime=2025-01-02T08:00:00Z&toTime=2025-01-02T08:10:00Z&fromRow=1&toRow=10 HTTP/1.1
Host: api.ptvgroup.tech
Authorization: apiKey YOUR_API_KEY
Accept: application/json
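For illustration, the request above can be issued from Python using only the standard library. This is a minimal sketch: the helper names are hypothetical, and YOUR_API_KEY is a placeholder for your own key.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.ptvgroup.tech/hda/v1"  # base URL taken from the example above


def build_time_slice_request(api_key, from_time, to_time=None, from_row=None, to_row=None):
    """Assemble the URL and headers for GET /time-slice/get (pure, no I/O)."""
    params = {"fromTime": from_time}
    if to_time is not None:
        params["toTime"] = to_time
    if from_row is not None:
        params["fromRow"] = from_row
    if to_row is not None:
        params["toRow"] = to_row
    url = f"{BASE_URL}/time-slice/get?{urllib.parse.urlencode(params)}"
    headers = {"Authorization": f"apiKey {api_key}", "Accept": "application/json"}
    return url, headers


def get_time_slice(api_key, from_time, **kwargs):
    """Perform the request and decode the JSON body."""
    url, headers = build_time_slice_request(api_key, from_time, **kwargs)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Splitting request construction from the network call keeps the query-string logic easy to test without hitting the API.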
Example of response
{
"metadata": {
"id": "-1996853140",
"mapVersion": "20241112133222",
"createdOn": "2025-01-15T17:50:16.982182087Z",
"totalElements": 13214,
"totalSize": 3118372,
"maxElementsPerRequest": 212765
},
"results": [
{
"index": 1,
"streetCode": "1029074164319670",
"streetIdno": 239600,
"streetFromNode": 198070,
"streetToNode": 16825,
"openLrCode": "Cwi0eR203iOOAAAN/+UjHg==",
"mapVersion": "20241112133222",
"values": [
{
"speed": 33.0,
"fdat": "2025-01-02T08:06:00Z",
"ldat": "2025-01-02T08:06:47Z"
},
{
"speed": 33.0,
"fdat": "2025-01-02T08:00:00Z",
"ldat": "2025-01-02T08:00:46Z"
}
]
},
{
"index": 2
// Additional street results...
}
]
}
Second step
Based on the response, you can analyze the data snapshot.
The request sets fromRow=1 and toRow=10, so the response contains the elements with index 1 through 9 (toRow is excluded). In the results structure, each element is a sub-set of information identified by its index value.
The response example shows only the data associated with index=1.
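As a sketch of such an analysis, the speeds in each values list can be aggregated per street. The average_speed_per_street helper is hypothetical, and the response below is a trimmed version of the example above:

```python
# Trimmed version of the example response shown above.
response = {
    "metadata": {"totalElements": 13214},
    "results": [
        {
            "index": 1,
            "streetIdno": 239600,
            "values": [
                {"speed": 33.0, "fdat": "2025-01-02T08:06:00Z", "ldat": "2025-01-02T08:06:47Z"},
                {"speed": 33.0, "fdat": "2025-01-02T08:00:00Z", "ldat": "2025-01-02T08:00:46Z"},
            ],
        },
    ],
}


def average_speed_per_street(response):
    """Map streetIdno -> mean of the measured speeds in this time slice."""
    averages = {}
    for result in response["results"]:
        speeds = [v["speed"] for v in result["values"]]
        if speeds:  # skip streets without measurements
            averages[result["streetIdno"]] = sum(speeds) / len(speeds)
    return averages
```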
Due to potentially large data volumes:
- Consider processing the data in batches, using fromRow and toRow parameters to paginate the data.
- Mind network bandwidth and memory usage when requesting large volumes of data.
- Implement efficient data processing pipelines and consider storing data in scalable storage solutions.
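The batching advice above can be sketched as a simple pagination loop. Here fetch_page is a stand-in for the real API call (which would pass fromRow and toRow as query parameters); it mimics the API's convention that fromRow starts at 1 and toRow is excluded:

```python
def fetch_page(from_row, to_row):
    """Stand-in for the API call; returns the slice [from_row, to_row)."""
    all_elements = list(range(1, 26))  # pretend the snapshot has 25 elements
    return all_elements[from_row - 1 : to_row - 1]


def fetch_all(batch_size=10):
    """Collect every element by requesting fixed-size batches until a page is empty."""
    elements = []
    from_row = 1
    while True:
        page = fetch_page(from_row, from_row + batch_size)
        if not page:  # an empty page means the dataset is exhausted
            break
        elements.extend(page)
        from_row += batch_size
    return elements
```

In a real pipeline, each page can be processed or persisted before the next one is fetched, keeping memory usage bounded.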
Best practices
About time constraints:
- Ensure that the toTime parameter is within 1 hour of fromTime.
- If you need data for a longer period, make multiple requests in 1-hour intervals.
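The longer-period advice can be sketched by splitting an arbitrary time range into intervals of at most one hour, one request per interval. The hourly_intervals helper is hypothetical:

```python
from datetime import datetime, timedelta, timezone


def hourly_intervals(start, end):
    """Split [start, end) into consecutive intervals of at most one hour."""
    intervals = []
    current = start
    while current < end:
        nxt = min(current + timedelta(hours=1), end)
        intervals.append((current, nxt))
        current = nxt
    return intervals
```

Each (start, end) pair can then be formatted as ISO 8601 and passed as fromTime and toTime of one request.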
Available data formats:
- JSON, for easy integration and parsing, especially for smaller datasets.
- Apache Parquet, suitable for big data processing. Use tools like Apache Spark, Hadoop, or pandas in Python to process Parquet files.
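As an illustration of the pandas route, a Parquet export could be loaded with pandas.read_parquet (which delegates to pyarrow or fastparquet). The file name and column layout below are assumptions for the sketch; the same aggregation is shown on an in-memory frame:

```python
import pandas as pd

# Hypothetical: load a Parquet export of the time-slice data.
# df = pd.read_parquet("time_slice_2025-01-02.parquet")

# The assumed column layout, shown as an in-memory frame for illustration:
df = pd.DataFrame(
    {
        "streetIdno": [239600, 239600],
        "speed": [33.0, 33.0],
        "fdat": ["2025-01-02T08:06:00Z", "2025-01-02T08:00:00Z"],
    }
)

# Mean measured speed per street, mirroring the JSON analysis above.
mean_speed = df.groupby("streetIdno")["speed"].mean()
```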