
Examples of how you can import data into, or export data from, Pigment using the Pigment API and Python.

 

1. Generate Pigment Import/Export API key

https://community.pigment.com/security-permissions-82/manage-api-keys-226
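
To keep the key out of your scripts, you can read it from an environment variable rather than hardcoding it. A minimal sketch, assuming a variable named PIGMENT_API_KEY (the name is just an example, not something Pigment requires):

import os

# Hypothetical environment variable name; use whatever your team standardises on
API_KEY = os.environ.get('PIGMENT_API_KEY')
if not API_KEY:
    raise RuntimeError('Set the PIGMENT_API_KEY environment variable before running this script.')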

 

2. For Imports into Pigment (to understand the process)

https://community.pigment.com/importing-and-exporting-data-95/how-to-trigger-an-import-with-apis-230

 

3. For Exports from Pigment (to understand the process)

https://community.pigment.com/importing-and-exporting-data-95/how-to-export-data-from-pigment-with-apis-229

 

4. To import data using Python:

import requests

# Replace with your actual Pigment API key and import ID
API_KEY = 'YOUR_PIGMENT_IMPORT_API_KEY'
IMPORT_ID = 'YOUR_IMPORT_ID'
API_URL = f'https://pigment.app/api/import/push/csv?configurationID={IMPORT_ID}'

# Path to your CSV file
CSV_FILE_PATH = 'yourfile.csv'

# Read the CSV file
with open(CSV_FILE_PATH, 'r') as file:
    csv_data = file.read()

# Set up the request headers and payload
headers = {
    'Authorization': f'Bearer {API_KEY}',
    'Content-Type': 'application/csv'
}
payload = csv_data

# Send the POST request to the Pigment API
response = requests.post(API_URL, headers=headers, data=payload)

# Check the response
if response.status_code == 200:
    print('Data successfully imported into Pigment.')
else:
    print(f'Error importing data to Pigment: {response.status_code} - {response.text}')

Explanation:

  • API Key and Import ID: Replace YOUR_PIGMENT_IMPORT_API_KEY and YOUR_IMPORT_ID with your actual Pigment API key and import ID.

  • CSV File Path: Specify the path to the CSV file containing the data you want to import, in this example yourfile.csv. The file structure must match the import configuration defined in Pigment, so check with your Pigment team.

  • Read CSV Data: The script reads the CSV file and prepares it as the request payload.

  • Set Up Request: The script sets up the request headers and payload.

  • Send Request: The script sends a POST request to the Pigment API to push the data.

  • Check Response: The script checks the response from the Pigment API and prints a success or error message.
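
If you run imports regularly, it can help to wrap the example above in a small reusable function. The sketch below uses the same endpoint and headers as the example; the function name push_csv_to_pigment is just an illustrative choice, not part of the Pigment API:

import requests

def push_csv_to_pigment(csv_path, api_key, import_id):
    """Read a local CSV file and push it to a Pigment import configuration."""
    url = f'https://pigment.app/api/import/push/csv?configurationID={import_id}'
    headers = {
        'Authorization': f'Bearer {api_key}',
        'Content-Type': 'application/csv'
    }
    with open(csv_path, 'r', encoding='utf-8') as file:
        csv_data = file.read()
    response = requests.post(url, headers=headers, data=csv_data)
    # raise_for_status() turns 4xx/5xx responses into exceptions you can handle upstream
    response.raise_for_status()
    print(f'Imported {csv_path} into configuration {import_id}.')

# Example usage (replace the placeholders with your own values):
# push_csv_to_pigment('yourfile.csv', 'YOUR_PIGMENT_IMPORT_API_KEY', 'YOUR_IMPORT_ID')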

5. To export data using Python:

import requests
import csv

# Replace with your actual Pigment API key and view ID
API_KEY = 'YOUR_PIGMENT_EXPORT_API_KEY'
VIEW_ID = 'YOUR_VIEW_ID'
API_URL = f'https://pigment.app/api/export/view/{VIEW_ID}'

# Set up the request headers
headers = {
    'Authorization': f'Bearer {API_KEY}'
}

# Send the GET request to the Pigment API
response = requests.get(API_URL, headers=headers)

# Check the response
if response.status_code == 200:
    csv_data = response.text
    # Save the CSV data to a file
    with open('exported_data.csv', 'w', newline='') as file:
        writer = csv.writer(file)
        for line in csv_data.splitlines():
            writer.writerow(line.split(';'))  # Adjust the delimiter if needed
    print('Data successfully exported from Pigment.')
else:
    print(f'Error exporting data from Pigment: {response.status_code} - {response.text}')

Explanation:

  • API Key and View ID: Replace YOUR_PIGMENT_EXPORT_API_KEY and YOUR_VIEW_ID with your actual Pigment API key and view ID.

  • Fetch Data: The script sends a GET request to the Pigment API to fetch the data.

  • Save Data: The script saves the fetched data to a CSV file named exported_data.csv. It splits the data by lines and then by semicolons (;) to handle the CSV format.
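
Once exported_data.csv is on disk you can process it with the standard csv module. A minimal sketch, assuming the first row of the export contains the column headers:

import csv

with open('exported_data.csv', newline='', encoding='utf-8') as file:
    reader = csv.DictReader(file)  # the writer above produced comma-delimited rows
    for row in reader:
        # Each row is a dict keyed by the header names from the export
        print(row)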

6. To export raw data (table) using Python:

There are occasions when the volume of data contained within a view is too large to export; in that situation you can use this method instead.

This Community article explains the different types of ‘raw’ exports. Please read it first; it is essential for understanding the raw export types.

https://community.pigment.com/importing-and-exporting-data-95/export-raw-data-from-a-pigment-block-1720

Note: This is a far more complex example than those shown previously.

import requests
import json
import logging

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# API KEY and API URL of Pigment
API_EXPORT_KEY = 'YOUR_PIGMENT_EXPORT_API_KEY'
API_EXPORT_URL = 'https://pigment.app/api/export/table/'

OUTPUT_FILE = 'exported_data.csv' # CSV file name
CHUNK_SIZE = 10 * 1024 * 1024  # stream the response in 10 MB chunks

PIGMENT_TABLE_ID = 'YOUR_PIGMENT_TABLE_ID'

headers = {
    'Authorization': f'Bearer {API_EXPORT_KEY}',
    'Content-Type': 'application/json;charset=utf-8'
}

datareq = {
    "friendlyHeaders": True  # use the Pigment metric names as column headers
}

logging.info('Starting export of %s...', PIGMENT_TABLE_ID)

try:
    with requests.post(API_EXPORT_URL + PIGMENT_TABLE_ID, headers=headers, data=json.dumps(datareq), stream=True) as s:
        s.raise_for_status()
        with open(OUTPUT_FILE, 'w', encoding="utf-8") as f:
            for chunk in s.iter_content(chunk_size=CHUNK_SIZE):
                if chunk:
                    f.write(chunk.decode('utf-8'))
    logging.info('Data successfully exported to %s', OUTPUT_FILE)
except requests.exceptions.HTTPError as http_err:
    logging.error('HTTP error occurred: %s', http_err)
except Exception as err:
    logging.error('An error occurred: %s', err)

Explanation:

  • Import Libraries:

    • requests: For making HTTP requests to the Pigment API.

    • json: For handling JSON data.

    • logging: For logging information and errors.

  • API Key and Table ID: Replace YOUR_PIGMENT_EXPORT_API_KEY and YOUR_PIGMENT_TABLE_ID with your actual Pigment API key and table ID.

  • Save Data: The script streams the response and writes it to a CSV file named exported_data.csv chunk by chunk, so the full dataset is never held in memory at once.

  • datareq: Contains the values from your payload generation step. Please note that Python uses ‘True’ and ‘False’ rather than ‘true’ and ‘false’ for the ‘friendlyHeaders’ value.

  • Streaming the Response: The stream=True parameter tells the requests library to stream the response content. This is useful for handling large responses that you don't want to load entirely into memory at once. The benefits of streaming are:

    • Memory Efficiency: Streaming allows you to process large amounts of data without loading it all into memory at once, which is crucial for handling big datasets.

    • Performance: By processing data in chunks, you can start working with the data as soon as the first chunk is received, rather than waiting for the entire response to be downloaded (see the sketch below).
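
As an illustration of that second point, a minimal sketch that processes the stream as it arrives instead of writing it to disk, here simply counting rows (the key and table ID are the same placeholders as above):

import json
import requests

headers = {
    'Authorization': 'Bearer YOUR_PIGMENT_EXPORT_API_KEY',
    'Content-Type': 'application/json;charset=utf-8'
}

row_count = 0
with requests.post('https://pigment.app/api/export/table/YOUR_PIGMENT_TABLE_ID',
                   headers=headers, data=json.dumps({"friendlyHeaders": True}), stream=True) as s:
    s.raise_for_status()
    # iter_lines yields rows as they arrive, without buffering the whole export in memory
    for line in s.iter_lines(decode_unicode=True):
        if line:
            row_count += 1
print(f'Export contains {row_count} rows (including the header row).')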

Please note this example only covers a Table raw export. If you need to export Lists or Metrics, please read the article above and adjust the payload and export URL in the Python code accordingly.
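
One way to prepare for that is to pass the export URL and payload in as parameters, so the same function can serve the Table export above now and a List or Metric raw export later, using whichever endpoint and payload the article specifies. A minimal sketch under that assumption; export_raw_block is an illustrative name, not a Pigment API call:

import json
import requests

def export_raw_block(endpoint, block_id, payload, api_key, output_file):
    """Stream a raw export from a Pigment block to a local CSV file.

    endpoint: the base export URL, e.g. 'https://pigment.app/api/export/table/' for a Table;
              for Lists or Metrics use the URL given in the Community article above.
    payload:  the request body from your payload generation step, e.g. {"friendlyHeaders": True}.
    """
    headers = {
        'Authorization': f'Bearer {api_key}',
        'Content-Type': 'application/json;charset=utf-8'
    }
    with requests.post(endpoint + block_id, headers=headers, data=json.dumps(payload), stream=True) as response:
        response.raise_for_status()
        with open(output_file, 'w', encoding='utf-8') as f:
            for chunk in response.iter_content(chunk_size=10 * 1024 * 1024):
                if chunk:
                    f.write(chunk.decode('utf-8'))

# Example usage for the Table export shown above:
# export_raw_block('https://pigment.app/api/export/table/', 'YOUR_PIGMENT_TABLE_ID',
#                  {"friendlyHeaders": True}, 'YOUR_PIGMENT_EXPORT_API_KEY', 'exported_data.csv')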
