As a trader and investor, I’m constantly trying to make profitable stock trades and invest more wisely. I’m primarily looking for ways to increase returns, decrease risk, and make smarter decisions.
One of the ways I make better trades and invest more wisely is through the use of various trading and investing tools. There are many great tools and web sites out there to help with trading and investing research, such as R, Yahoo! Finance, Google Finance, and Wolfram Alpha, to name a few. Sometimes I’ll even create my own tools to use.
The stock price decline checker tool checks a list of stocks to see if the price has declined a certain percentage from its maximum price over a specified number of days.
The goal is primarily to identify index ETFs that have declined substantially from their maximum prices over the past few weeks or months, which could signal a buying opportunity.
One way I use the stock price decline checker is to see if index ETFs such as QQQ, SPY, or DIA have declined significantly from their max prices over the past 2 months. To be more specific, I’ll run the stock price decline checker to check whether the index ETFs have fallen more than 7.5% from their maximum prices over the past 50 days. A fall of this size would indicate a substantial drop in the overall market, such as a market correction, and to me this could signal a good buying opportunity.
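The check itself boils down to comparing the latest price against the windowed maximum. Here’s a minimal sketch in Python of that calculation (the function name and sample price series are my own illustrations, not the tool’s actual code):

```python
# Minimal sketch of the decline check. Assumes daily closing prices
# are already available as a list, most recent price last.

def has_declined(prices, pct_drop, window):
    """Return True if the latest price is more than `pct_drop` percent
    below the maximum price over the last `window` days."""
    recent = prices[-window:]                     # last `window` closing prices
    peak = max(recent)                            # maximum price in the window
    decline = (peak - recent[-1]) / peak * 100.0  # percent drop from the peak
    return decline > pct_drop

# Example: a series that fell from a peak of 100 down to 91
prices = [95, 98, 100, 97, 94, 91]
print(has_declined(prices, 7.5, 50))  # a 9% drop from the max -> True
```

In practice the price history would come from a data source such as Yahoo! Finance, and the function would be run over each ticker in the watch list.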
Auto-Scaling Celery Workers with Rackspace Cloud Servers
I have been working on an automated stock trading system for some time. Part of my automated trading system involves a lot of number crunching and calculations, such as for technical analysis and neural networks. Processing large amounts of data for thousands of stock tickers can be very time consuming. I have begun creating additional worker nodes to give myself more processing power and complete the data processing and number crunching much faster.
I have created some tools to help me manage these additional worker nodes and simplify scaling the data processing workers up and down. My tools automatically spin up worker nodes using the Rackspace Cloud Servers API and then start my python celery workers to crunch the data and process the tasks in my RabbitMQ queue. When processing is complete and the queue is empty, the auto-scaling script spins down and destroys the worker cloud servers.
The auto-scaling tool checks my RabbitMQ queue size, and if there is a large number of tasks in the queue, it creates a number of cloud servers using the Rackspace Cloud API and an image template I have created and saved at Rackspace Cloud.
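Reading the queue size can be done through the RabbitMQ management plugin’s HTTP API, which reports a queue’s depth in its "messages" field. Here’s a rough sketch using only the standard library (the host, credentials, and queue name are placeholders, not my actual setup):

```python
# Sketch: fetch a queue's depth from the RabbitMQ management API.
# Hostname, credentials, vhost, and queue name below are placeholders.
import base64
import json
import urllib.request

def fetch_queue_size(host, user, password, vhost, queue):
    """Return the number of messages waiting in `queue`.
    For the default vhost "/" pass the URL-encoded form "%2F"."""
    url = "http://%s:15672/api/queues/%s/%s" % (host, vhost, queue)
    req = urllib.request.Request(url)
    # The management API uses HTTP basic auth.
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return parse_queue_size(json.load(resp))

def parse_queue_size(info):
    # The queue object's "messages" field is the total queue depth.
    return int(info.get("messages", 0))
```

The script can then compare that number against a threshold to decide whether more workers are needed.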
Basically it works like this:
- Each worker instance is built from a template image, so it has the exact same packages and code base.
- If the queue size is very large, create a bunch of workers.
- If the queue size is 0, delete the additional workers to save money by removing instances we’re not actively using.
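The scaling decision itself reduces to a couple of threshold checks. A minimal sketch of that logic (the thresholds, per-worker batch size, and function names here are assumptions for illustration, not the actual script’s values):

```python
# Sketch of the scale-up / scale-down decision. The real script reads
# the queue size from RabbitMQ and calls the Rackspace Cloud API;
# thresholds and batch sizes below are illustrative.

def workers_to_create(queue_size, current_workers,
                      tasks_per_worker=500, max_workers=10):
    """How many new worker servers to spin up for this queue depth."""
    if queue_size == 0:
        return 0
    # Ceiling division: one worker per `tasks_per_worker` queued tasks,
    # capped at `max_workers` total.
    wanted = min(max_workers, -(-queue_size // tasks_per_worker))
    return max(0, wanted - current_workers)

def should_destroy_workers(queue_size, current_workers):
    """Tear down the extra workers once the queue is drained."""
    return queue_size == 0 and current_workers > 0

print(workers_to_create(2600, 0))    # -> 6
print(should_destroy_workers(0, 6))  # -> True
```

Keeping the decision logic in pure functions like these makes it easy to test without touching the cloud API.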
I’m using Fabric for Python to create the celery worker servers. Fabric is a very powerful tool that can run commands automatically on the newly created servers. I’m also using the python-cloudservers package to interface with the Rackspace Cloud API and create the servers. After the servers are created, I use rsync to copy my code base over to the new cloud servers, and Fabric starts up the celery worker daemon on each new worker node. The celery daemon on the worker then takes care of the rest and starts processing the tasks from the RabbitMQ messaging queue.
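Under the hood, the Fabric steps reduce to a couple of shell commands run per new server. Here’s a rough sketch of how those commands might be built (the host, user, paths, and the celeryd invocation are placeholders standing in for my actual setup, and the Fabric plumbing itself is omitted):

```python
# Build the shell commands run against each newly created worker.
# Hosts, users, paths, and the celeryd invocation are placeholders.

def rsync_command(host, local_path, remote_path, user="root"):
    """rsync the code base out to a freshly built worker node."""
    return "rsync -az %s %s@%s:%s" % (local_path, user, host, remote_path)

def start_worker_command(remote_path):
    """Start the celery worker daemon from the synced code base.
    The exact daemon invocation depends on the celery setup."""
    return "cd %s && celeryd --detach" % remote_path

print(rsync_command("10.0.0.5", "/srv/app/", "/srv/app/"))
print(start_worker_command("/srv/app"))
```

With Fabric, the second command would typically be passed to its remote-execution helper against each new server’s address; since all workers are built from the same image, the same commands work on every node.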
The script I’m using to auto-scale up and down is a custom script I have written, auto-scale.py, which runs via a crontab entry on my primary/master processing server.
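For reference, a crontab entry for this kind of periodic check might look like the following (the interval, interpreter path, script path, and log file are placeholders, not my actual configuration):

```shell
# Hypothetical crontab entry on the master server:
# check the queue and scale workers every 5 minutes,
# appending output to a log file for debugging.
*/5 * * * * /usr/bin/python /home/user/auto-scale.py >> /var/log/auto-scale.log 2>&1
```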