InfluxDB is a time series database written in Go.
Writing Data
The schema of an InfluxDB database is determined by the data types of the first points written: the first write fixes each field's type. Later writes that use a conflicting type for an existing field will be rejected.
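As an illustration (a sketch only: the "sensors" measurement and "room" tag are invented, and the "lsf" database matches the curl examples in this article), the first write below fixes the field type of "value" to float, so the second, string-typed write is rejected:

```shell
# Invented measurement "sensors": the first write fixes the field type
# of "value" to float (for the current shard)
first='sensors,room=lab value=21.5'
# A string for the same field now conflicts; the server rejects it
# with a "field type conflict" error (HTTP 400)
second='sensors,room=lab value="warm"'

# || true: tolerate an unreachable server in this sketch
curl -s -X POST 'http://influxdb/write?db=lsf' --data-binary "$first"  || true
curl -s -X POST 'http://influxdb/write?db=lsf' --data-binary "$second" || true
```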
Curl
With curl, you can POST values to a particular database using the line protocol. For more information, see https://docs.influxdata.com/influxdb/v0.9/write_protocols/line/
$ curl -X POST 'http://influxdb/write?db=lsf' --data-binary "measurement,tag1=x,tag2=x recorded_value1=x $(date +%s)000000000"
For multiple entries, write one point per line, separated by \n newlines, and POST the batch with curl as above.
$ cat /tmp/measurements | curl -X POST 'http://influxdb/write?db=lsf' --data-binary @-
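As a sketch of building such a batch file (the cpu_load measurement, host tags, and values are invented for illustration; the lsf database matches the examples above):

```shell
# Timestamp in nanoseconds since the epoch (the line protocol default)
ts=$(date +%s)000000000

# One point per line; the cpu_load measurement and host tags are invented
printf '%s\n' \
  "cpu_load,host=node1 value=0.64 $ts" \
  "cpu_load,host=node2 value=0.71 $ts" \
  "cpu_load,host=node3 value=0.23 $ts" > /tmp/measurements

# POST the whole batch in one request; @file reads the body from the file,
# so the cat pipe is optional (|| true: tolerate an unreachable server here)
curl -s -X POST 'http://influxdb/write?db=lsf' --data-binary @/tmp/measurements || true
```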
Deleting Measurement
To delete an entire measurement, you must enter the influx command line, set the database and then run
DROP MEASUREMENT. You cannot do this from Chronograf.
# influx -host localhost
Connected to http://localhost:8086 version 1.7.10
InfluxDB shell version: 1.7.10
> show databases;
nodered
> use nodered
Using database nodered
> drop measurement local_weather;
>
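The same statement can be run non-interactively: the influx 1.x client's -database and -execute flags run a single command and exit, which is handy for scripts and cron jobs. A sketch, reusing the names from the session above:

```shell
query='DROP MEASUREMENT local_weather'

# -execute runs one statement and exits; || true tolerates a missing
# server or CLI in this sketch
influx -host localhost -database nodered -execute "$query" || true
```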
Delete a Series
To delete a specific series based on tag values, use the
DELETE FROM query from the influx command line.
# influx -host localhost
Connected to http://localhost:8086 version 1.7.10
InfluxDB shell version: 1.7.10
> use nodered
Using database nodered
> DELETE FROM "power" WHERE device = 'AWP04L-1'
Deleting a series will not drop any fields even if the field no longer has any values. See the next section on dropping a field.
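In InfluxDB 1.x, the WHERE clause of DELETE may reference tags and time (but not fields), so the same approach can trim old points from a series. A sketch reusing the measurement above; the 30-day cutoff is an invented example:

```shell
# WHERE in DELETE may reference tags and time only, never fields.
# The 30-day cutoff is illustrative.
query="DELETE FROM \"power\" WHERE \"device\" = 'AWP04L-1' AND time < now() - 30d"

# || true: tolerate an unreachable server or missing CLI in this sketch
influx -host localhost -database nodered -execute "$query" || true
```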
Drop a Field
There is no capability to drop a field in InfluxDB. There is a feature request for it at https://github.com/influxdata/influxdb/issues/6150.
A workaround is to SELECT the fields that you want into a placeholder measurement, drop the original measurement with the extraneous fields, and then re-select everything from the placeholder back into the original measurement.
# influx -host localhost
Connected to http://localhost:8086 version 1.7.10
InfluxDB shell version: 1.7.10
> use nodered
Using database nodered

## Note the extra field starting with an uppercase
> select * FROM power LIMIT 1;
name: power
time                Amperage ApparentPower Factor ReactivePower Voltage Wattage amperage apparentPower device               factor reactivePower voltage wattage
----                -------- ------------- ------ ------------- ------- ------- -------- ------------- ------               ------ ------------- ------- -------
1588742430632718326 0.033                                                                              02200194dc4f22137d7f

## Copy desired fields to a placeholder measurement
> select time, amperage, apparentPower, device, factor, reactivePower, voltage, wattage INTO demo FROM power group by *
name: result
time written
---- -------
0    574590

## Drop the original measurement
> drop measurement power

## Re-insert everything back into the original measurement
> select * INTO power FROM demo
name: result
time written
---- -------
0    574590

## Drop the placeholder
> drop measurement demo
Because this copies all of the data twice, it might not be a good solution for extremely large measurements.
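For reference, the whole workaround can be scripted with the client's -execute flag, one statement per call. This is a sketch only: it reuses the names from the session above and is not safe while writers are active, since points written between the copy and the drop are lost:

```shell
# Helper: run one statement against the nodered database;
# || true tolerates a missing server or CLI in this sketch
influx_exec() { influx -host localhost -database nodered -execute "$1" || true; }

# 1. Copy the desired (lowercase) fields to a placeholder measurement
copy='SELECT time, amperage, apparentPower, device, factor, reactivePower, voltage, wattage INTO demo FROM power GROUP BY *'
influx_exec "$copy"

# 2. Drop the original measurement, extraneous fields and all
influx_exec 'DROP MEASUREMENT power'

# 3. Copy everything back and drop the placeholder
influx_exec 'SELECT * INTO power FROM demo'
influx_exec 'DROP MEASUREMENT demo'
```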
Show unique tag values
To show all tag values that are in a measurement ('jobs' in the example below):
# influx -host localhost
Connected to http://localhost:8086 version 1.7.10
InfluxDB shell version: 1.7.10
> use x
Using database x
> show tag values from jobs with key = partition
name: jobs
key       value
---       -----
partition apophis
partition apophis-bf
partition bigmem
partition breezy
partition cpu2013
partition cpu2019
partition gpu
partition gpu-v100
partition lattice
partition parallel
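For scripting, the influx 1.x client can return the same result in machine-readable form via its -format flag (csv or json). A sketch, reusing the database and measurement above:

```shell
query='SHOW TAG VALUES FROM jobs WITH KEY = partition'

# -format csv (or json) makes the output easy to parse with cut/awk;
# || true tolerates a missing server or CLI in this sketch
influx -host localhost -database x -format csv -execute "$query" || true
```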
Migrating from Prometheus
A proprietary tool might exist. See: https://docs.tibco.com/pub/ftl/5.4.0/doc/html/GUID-BAE2C630-90B2-47FC-A7D6-97AB16065A0C.html
To migrate data from Prometheus to InfluxDB, use the prom2influx migration utility:
Ensure that the Prometheus server is running.
Ensure that the InfluxDB server is running.
Run the migration utility. For example, this command migrates all the data points in Prometheus:
The duration of this step depends on the amount of data you migrate, network communication bandwidth and speeds, and other factors. You can migrate a subset of the data by specifying the -start and -range parameters, for example: