Scripts

Fetch Pricing Datasets from IEX Cloud and Tradier

Fetch new pricing datasets for one or many tickers at once, or pull screeners from IEX Cloud (https://iexcloud.io), Tradier (https://tradier.com/) and FinViz (https://finviz.com/)

  1. Fetch pricing data
  2. Publish pricing data to Redis and Minio

Examples

Fetch Intraday Minute Pricing Data

fetch -t QQQ -g min

Fetch Intraday Option Chains for Calls and Puts

fetch -t QQQ -g td

Fetch Intraday News, Minute and Options

fetch -t QQQ -g news,min,td

Debugging

Turn on verbose debugging with the -d argument:

fetch -t QQQ -g min -d
analysis_engine.scripts.fetch_new_stock_datasets.fetch_new_stock_datasets()

Collect datasets for a ticker from IEX Cloud or Tradier

Setup

export IEX_TOKEN=YOUR_IEX_CLOUD_TOKEN
export TD_TOKEN=YOUR_TRADIER_TOKEN

Pull Data for a Ticker from IEX and Tradier

fetch -t TICKER

Pull from All Supported IEX Feeds

fetch -t TICKER -g iex-all

Pull from All Supported Tradier Feeds

fetch -t TICKER -g td

Intraday IEX and Tradier Feeds (only minute and news to reduce costs)

fetch -t TICKER -g intra
# or manually:
# fetch -t TICKER -g td,iex_min,iex_news

Daily IEX Feeds (daily and news)

fetch -t TICKER -g daily
# or manually:
# fetch -t TICKER -g iex_day,iex_news

Weekly IEX Feeds (company, financials, earnings, dividends, news, and peers)

fetch -t TICKER -g weekly
# or manually:
# fetch -t TICKER -g iex_fin,iex_earn,iex_div,iex_peers,iex_news,
# iex_comp

IEX Minute

fetch -t TICKER -g iex_min

IEX News

fetch -t TICKER -g iex_news

IEX Daily

fetch -t TICKER -g iex_day

IEX Stats

fetch -t TICKER -g iex_stats

IEX Peers

fetch -t TICKER -g iex_peers

IEX Financials

fetch -t TICKER -g iex_fin

IEX Earnings

fetch -t TICKER -g iex_earn

IEX Dividends

fetch -t TICKER -g iex_div

IEX Quote

fetch -t TICKER -g iex_quote

IEX Company

fetch -t TICKER -g iex_comp

Note

This requires the following services to be listening on:

  • redis localhost:6379
  • minio localhost:9000
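
If either service is down, fetches will fail when publishing. A quick way to confirm both ports are reachable is a sketch like the one below (it assumes the redis python client is installed; the socket probe only checks that minio's port is open, not that the service is healthy):

import socket

import redis

# confirm redis answers a PING on localhost:6379
redis.Redis(host='localhost', port=6379).ping()

# confirm something is listening on minio's port (localhost:9000)
socket.create_connection(('localhost', 9000), timeout=2).close()
print('redis and minio ports are reachable')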

Backtest an Algorithm and Plot the Trading History

A tool for showing how to build an algorithm and run a backtest with an algorithm config dictionary

import analysis_engine.consts as ae_consts
import analysis_engine.algo as base_algo
import analysis_engine.run_algo as run_algo

ticker = 'SPY'
timeseries = 'minute'  # bar interval the backtest iterates over

willr_close_path = (
    'analysis_engine/mocks/example_indicator_williamsr.py')
willr_open_path = (
    'analysis_engine/mocks/example_indicator_williamsr_open.py')
algo_config_dict = {
    'name': 'min-runner',
    'timeseries': timeseries,
    'trade_horizon': 5,
    'num_owned': 10,
    'buy_shares': 10,
    'balance': 10000.0,
    'commission': 6.0,
    'ticker': ticker,
    'algo_module_path': None,
    'algo_version': 1,
    'verbose': False,               # log in the algorithm
    'verbose_processor': False,     # log in the indicator processor
    'verbose_indicators': False,    # log all indicators
    'verbose_trading': True,        # log in the algo trading methods
    'positions': {
        ticker: {
            'shares': 10,
            'buys': [],
            'sells': []
        }
    },
    'buy_rules': {
        'confidence': 75,
        'min_indicators': 3
    },
    'sell_rules': {
        'confidence': 75,
        'min_indicators': 3
    },
    'indicators': [
        {
            'name': 'willr_-70_-30',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 80,
            'buy_below': -70,
            'sell_above': -30,
            'is_buy': False,
            'is_sell': False,
            'verbose': False  # log in just this indicator
        },
        {
            'name': 'willr_-80_-20',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 30,
            'buy_below': -80,
            'sell_above': -20,
            'is_buy': False,
            'is_sell': False
        },
        {
            'name': 'willr_-90_-10',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 60,
            'buy_below': -90,
            'sell_above': -10,
            'is_buy': False,
            'is_sell': False
        },
        {
            'name': 'willr_open_-80_-20',
            'module_path': willr_open_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_open_value': 0,
            'num_points': 80,
            'buy_below': -80,
            'sell_above': -20,
            'is_buy': False,
            'is_sell': False
        }
    ],
    'slack': {
        'webhook': None
    }
}

class ExampleCustomAlgo(base_algo.BaseAlgo):
    def process(self, algo_id, ticker, dataset):
        if self.verbose:
            print(
                f'process start - {self.name} '
                f'date={self.backtest_date} minute={self.latest_min} '
                f'close={self.latest_close} high={self.latest_high} '
                f'low={self.latest_low} open={self.latest_open} '
                f'volume={self.latest_volume}')
    # end of process
# end of ExampleCustomAlgo


algo_obj = ExampleCustomAlgo(
    ticker=algo_config_dict['ticker'],
    config_dict=algo_config_dict)

algo_res = run_algo.run_algo(
    ticker=algo_config_dict['ticker'],
    algo=algo_obj,
    raise_on_err=True)

if algo_res['status'] != ae_consts.SUCCESS:
    print(
        'failed running algo backtest '
        f'{algo_obj.get_name()} hit status: '
        f'{ae_consts.get_status(status=algo_res["status"])} '
        f'error: {algo_res["err"]}')
else:
    print(
        f'backtest: {algo_obj.get_name()} '
        f'{ae_consts.get_status(status=algo_res["status"])} - '
        'plotting history')
# end of if/else
analysis_engine.scripts.run_backtest_and_plot_history.build_example_algo_config(ticker, timeseries='minute')

helper for building an algorithm config dictionary

Returns: algorithm config dictionary
class analysis_engine.scripts.run_backtest_and_plot_history.ExampleCustomAlgo(ticker=None, balance=5000.0, commission=6.0, tickers=None, name=None, use_key=None, auto_fill=True, version=1, config_file=None, config_dict=None, output_dir=None, publish_to_slack=False, publish_to_s3=False, publish_to_redis=False, publish_input=True, publish_history=True, publish_report=True, load_from_s3_bucket=None, load_from_s3_key=None, load_from_redis_key=None, load_from_file=None, load_compress=False, load_publish=True, load_config=None, report_redis_key=None, report_s3_bucket=None, report_s3_key=None, report_file=None, report_compress=False, report_publish=True, report_config=None, history_redis_key=None, history_s3_bucket=None, history_s3_key=None, history_file=None, history_compress=False, history_publish=True, history_config=None, extract_redis_key=None, extract_s3_bucket=None, extract_s3_key=None, extract_file=None, extract_save_dir=None, extract_compress=False, extract_publish=True, extract_config=None, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], timeseries=None, trade_strategy=None, verbose=False, verbose_processor=False, verbose_indicators=False, verbose_trading=False, verbose_load=False, verbose_extract=False, verbose_history=False, verbose_report=False, inspect_datasets=False, raise_on_err=True, **kwargs)
process(algo_id, ticker, dataset)

Run a custom algorithm after all the indicators from the algo_config_dict have been processed and all the number crunching is done. This allows the algorithm class to focus on the high-level trade execution problems like bid-ask spreads and opening the buy/sell trade orders.

How does it work?

The engine provides a data stream from the latest pricing updates stored in redis. Once new data is stored in redis, algorithms can use each dataset as a chance to evaluate buy and sell decisions. This is your own custom trading logic, based on what the indicators find and on any non-indicator data provided within the dataset dictionary.

Dataset Dictionary Structure

Here is what the dataset variable looks like when your algorithm’s process method is called (assuming you have redis running with actual pricing data too):

import pandas as pd

dataset = {
    'id': dataset_id,
    'date': date,
    'data': {
        'daily': pd.DataFrame([]),
        'minute': pd.DataFrame([]),
        'quote': pd.DataFrame([]),
        'stats': pd.DataFrame([]),
        'peers': pd.DataFrame([]),
        'news1': pd.DataFrame([]),
        'financials': pd.DataFrame([]),
        'earnings': pd.DataFrame([]),
        'dividends': pd.DataFrame([]),
        'calls': pd.DataFrame([]),
        'puts': pd.DataFrame([]),
        'pricing': pd.DataFrame([]),
        'news': pd.DataFrame([])
    }
}
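
As a rough sketch (not the project's reference implementation), a process method can guard against empty frames before reading the newest bar; the 'close' column name is an assumption here, so inspect your own datasets to confirm it:

def process(self, algo_id, ticker, dataset):
    minute_df = dataset['data']['minute']
    if minute_df.empty:
        # no intraday bars were stored for this date
        return
    latest_bar = minute_df.iloc[-1]
    if self.verbose:
        print(f'{ticker} newest minute close={latest_bar["close"]}')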

Tip

you can also inspect these datasets by setting the algorithm’s config dictionary key "inspect_datasets": True
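
For example, using the config dictionary from the backtest example above:

algo_config_dict['inspect_datasets'] = True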

Parameters:
  • algo_id – string - algo identifier label for debugging datasets during specific dates
  • ticker – string - ticker
  • dataset – a dictionary of identifiers (for debugging) and multiple pandas DataFrame objects.
analysis_engine.scripts.run_backtest_and_plot_history.run_backtest_and_plot_history(config_dict)

Run a derived algorithm with an algorithm config dictionary

Parameters: config_dict – algorithm config dictionary

Plot the Trading History from a File on Disk

A tool for plotting an algorithm’s Trading History from a file saved locally by running the backtester with the save-to-file option enabled:

run_backtest_and_plot_history.py -t SPY -f <SAVE_HISTORY_TO_THIS_FILE>
analysis_engine.scripts.plot_history_from_local_file.plot_local_history_file()

Plot an algorithm’s trading history from a locally saved file

Publish Stock Data from S3 to Redis

Publish the contents of an S3 key to a Redis key

Steps:

  1. Parse arguments
  2. Download pricing data as a Celery task
  3. Publish pricing data as a Celery task
  4. Coming Soon - Start buy/sell analysis as Celery task(s)
analysis_engine.scripts.publish_from_s3_to_redis.publish_from_s3_to_redis()

Download an S3 key and publish its contents to Redis

Run Aggregate and then Publish data for a Ticker from S3 to Redis

Publish the aggregated S3 contents of a ticker to a Redis key and back to S3

Steps:

  1. Parse arguments
  2. Download and aggregate ticker data from S3 as a Celery task
  3. Publish aggregated data to S3 as a Celery task
  4. Publish aggregated data to Redis as a Celery task
analysis_engine.scripts.publish_ticker_aggregate_from_s3.publish_ticker_aggregate_from_s3()

Download all ticker data from S3 and publish its contents to Redis and back to S3

Stock Analysis Command Line Tool

This tool is for preparing, analyzing and using datasets to run predictions with TensorFlow and Keras.

  1. Get an algorithm-ready dataset
  • Fetch and extract algorithm-ready datasets
  • Optional - Prepare a dataset from s3 or redis. A prepared dataset can be used for analysis.
  2. Run an algorithm using the cached datasets
  • Coming Soon - Analyze datasets and store output (generated csvs) in s3 and redis.
  • Coming Soon - Make predictions using an analyzed dataset

Supported Actions

  1. Algorithm-Ready Datasets

    Algo-ready datasets are created by the Algorithm Extraction API.

    You can tune algorithm performance by deriving your own algorithm class from analysis_engine.algo.BaseAlgo and then loading the dataset from s3, redis or a file by passing the correct arguments.

    Command line actions:

    • Extract algorithm-ready datasets out of redis to a file

      sa -t SPY -e ~/SPY-$(date +"%Y-%m-%d").json
      
    • View algorithm-ready datasets in a file

      sa -t SPY -l ~/SPY-$(date +"%Y-%m-%d").json
      
    • Restore algorithm-ready datasets from a file to redis

      This also works as a backup tool for archiving an entire single ticker dataset from redis to a single file. (zlib compression is code-complete but has not been debugged end-to-end)

      sa -t SPY -L ~/SPY-$(date +"%Y-%m-%d").json
      

      Warning

      if the output redis key or s3 key already exists, this process will overwrite the previously stored values

  2. Run an Algorithm

    Please refer to the included Minute Algorithm for an up-to-date reference.

    sa -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py
    
analysis_engine.scripts.sa.restore_missing_dataset_values_from_algo_ready_file(ticker, path_to_file, redis_address, redis_password, redis_db=0, output_redis_db=None, compress=True, encoding='utf-8', dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], show_summary=True)

restore missing dataset nodes in redis from an algorithm-ready dataset file on disk - use this to restore redis from scratch

Parameters:
  • ticker – string ticker
  • path_to_file – string path to file on disk
  • redis_address – redis server endpoint address with format host:port
  • redis_password – optional - string password for redis
  • redis_db – redis db (default is REDIS_DB)
  • output_redis_db – optional - integer for different redis database (default is None)
  • compress – contents in algorithm-ready file are compressed (default is True)
  • encoding – byte encoding of algorithm-ready file (default is utf-8)
  • dataset_type – optional - dataset type (default is SA_DATASET_TYPE_ALGO_READY)
  • serialize_datasets – optional - list of dataset names to deserialize in the dataset
  • show_summary – optional - show a summary of the algorithm-ready dataset using analysis_engine.show_dataset.show_dataset (default is True)
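
A hypothetical call based on the signature above, with the file path and redis endpoint as placeholders for your own deployment:

from analysis_engine.scripts.sa import (
    restore_missing_dataset_values_from_algo_ready_file)

# restore any missing SPY dataset nodes in redis from a local backup file
restore_missing_dataset_values_from_algo_ready_file(
    ticker='SPY',
    path_to_file='/tmp/SPY-backup.json',
    redis_address='localhost:6379',
    redis_password=None,
    redis_db=0)
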
analysis_engine.scripts.sa.examine_dataset_in_file(path_to_file, compress=False, encoding='utf-8', ticker=None, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'])

Show the internal dataset dictionary structure in a dataset file

Parameters:
  • path_to_file – path to file
  • compress – optional - boolean flag for decompressing the contents of the path_to_file if necessary (default is False and algorithms use zlib for compression)
  • encoding – optional - string for data encoding
  • ticker – optional - string ticker symbol to verify is in the dataset
  • dataset_type – optional - dataset type (default is SA_DATASET_TYPE_ALGO_READY)
  • serialize_datasets – optional - list of dataset names to deserialize in the dataset
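
And a matching sketch for inspecting a saved dataset file (the path is a placeholder):

from analysis_engine.scripts.sa import examine_dataset_in_file

# print the internal dataset dictionary structure for a saved SPY extract
examine_dataset_in_file(
    path_to_file='/tmp/SPY-backup.json',
    ticker='SPY')
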
analysis_engine.scripts.sa.run_sa_tool()

Run buy and sell analysis on a stock to send alerts to subscribed users

Set S3 Environment Variables

Set these as needed for your S3 deployment

export ENABLED_S3_UPLOAD=<'0' disabled which is the default, '1' enabled>
export S3_ACCESS_KEY=<access key>
export S3_SECRET_KEY=<secret key>
export S3_REGION_NAME=<region name: us-east-1>
export S3_ADDRESS=<S3 endpoint address host:port like: localhost:9000>
export S3_UPLOAD_FILE=<path to file to upload>
export S3_BUCKET=<bucket name - pricing default>
export S3_COMPILED_BUCKET=<compiled bucket name - compileddatasets default>
export S3_KEY=<key name - SPY_demo default>
export S3_SECURE=<use ssl '1', disable with '0' which is the default>
export PREPARE_S3_BUCKET_NAME=<prepared dataset bucket name>
export ANALYZE_S3_BUCKET_NAME=<analyzed dataset bucket name>
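
The engine reads these variables internally; as a rough sketch of what an equivalent hand-built client could look like (boto3 is an assumption here - the project may wire up S3 differently):

import os

import boto3

# build the endpoint from S3_ADDRESS and S3_SECURE
s3_address = os.getenv('S3_ADDRESS', 'localhost:9000')
use_ssl = os.getenv('S3_SECURE', '0') == '1'
endpoint_url = f'{"https" if use_ssl else "http"}://{s3_address}'

s3 = boto3.client(
    's3',
    endpoint_url=endpoint_url,
    aws_access_key_id=os.getenv('S3_ACCESS_KEY'),
    aws_secret_access_key=os.getenv('S3_SECRET_KEY'),
    region_name=os.getenv('S3_REGION_NAME', 'us-east-1'))
print(s3.list_buckets())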

Set Redis Environment Variables

Set these as needed for your Redis deployment

export ENABLED_REDIS_PUBLISH=<'0' disabled which is the default, '1' enabled>
export REDIS_ADDRESS=<redis endpoint address host:port like: localhost:6379>
export REDIS_KEY=<key to cache values in redis>
export REDIS_PASSWORD=<optional - redis password>
export REDIS_DB=<optional - redis database - 0 by default>
export REDIS_EXPIRE=<optional - redis expiration for data in seconds>
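
And a matching sketch for a hand-built Redis connection from the same variables (assuming the redis python client):

import os

import redis

# REDIS_ADDRESS uses the host:port format
host, port = os.getenv('REDIS_ADDRESS', 'localhost:6379').split(':')
rc = redis.Redis(
    host=host,
    port=int(port),
    password=os.getenv('REDIS_PASSWORD') or None,
    db=int(os.getenv('REDIS_DB', '0')))
rc.ping()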