Wednesday, October 22, 2014

Finding Correlated Securities

Recently, I completed a project that was a challenge posted on MindSumo. MindSumo is a site where companies can post challenges with rewards for students to compete to solve. The challenge was to find securities in the DJIA whose prices behaved similarly over a period of roughly 2 years (data set was provided). This problem of finding correlated securities was interesting, so I thought I'd share my solution here.

To be more specific about the challenge, we were asked to create two or more indices of securities from the Dow. The program had to output the lists, and there was a subtle hint that some visualization of the data should come out of it as well. Top 5 solutions get $150: I'm in.

My Solution

I decided to use Excel and VBA because, well, it's easier that way. The stock data was saved as .csv files in a single folder, so I just needed to open them and put everything I needed on one sheet - something I've done before. No reason to make this overly complicated; I'll stick to the Basics (get it? VBA... Basics... never mind).

The stock data we received was essentially the usual output from Yahoo Finance, except that the company name was placed on the sheet as well (for easy access I suppose). I decided to only use the Adjusted Close prices, since the time period was long enough that accounting for dividends and/or splits was necessary. So, I created one sheet containing the adjusted close prices for each security. The frequency was daily, just to be clear.

Once I had the prices for each security, I calculated the daily return for each as well. We typically deal with returns in class, and many models use return rather than price, so this may have been strictly out of habit. I also calculated the trailing 30-day standard deviation of the prices. Each of these was calculated and placed on a separate worksheet. Ultimately, these things were a complete waste of time, and I wish I had known in advance that they would be useless to me later on.

After creating all of those extra sheets, it was time to tackle the real problem: finding correlated securities. Most people would have reached for the standard correlation matrix first, but not I. It was (almost) immediately apparent to me what a rabbit hole the correlation matrix would be, and how painful it would be to work with. Insist on a minimum correlation between every pair of stocks in a group, and you'll quickly find that no group ever makes the cut. So I thought of a new way to approach the problem: rather than looking at the correlation of every individual stock pairing, what if I could look at only 30 correlations, one per stock? But how would I even go about doing that? Well, this is where a bit of luck was involved.

Recall the transitive property: A = C and B = C implies A = B. Two different quantities being equal to a third quantity means that all three are equal. In this case, I would just need some independent reference point to compare correlations against. I decided to use the average price of the stocks in the index as this reference point: for each day, I summed the prices of every security in the index and divided by the number of securities. Then I took the correlation of this average series with each security's price series. Unfortunately, correlation is not transitive in general. However, as the correlations approach 1, something like transitivity comes back into play, because a correlation of 1 implies the series move together exactly. Luckily, stock prices move together quite frequently! Over half of the stocks had correlations with the average price series above 0.9, and ten topped 0.95, which I thought was truly astonishing. Since 0.95 is quite close to 1, I think it is safe to assume that those ten stocks are all highly correlated with each other. With that discovery, the work for portfolio #1 was done! Regrettably, it took many hours to figure this out the first time.
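I did all of this in Excel, but the same calculation is only a few lines of R. A minimal sketch, assuming the adjusted closes live in a data frame called prices with one column per stock (the file name is hypothetical):

prices <- read.csv("adjusted_close.csv", row.names = 1)  # dates as row names
avg_price <- rowMeans(prices)                # the reference series: average price per day
ref_cor <- cor(prices, avg_price)            # each stock's correlation with the average
sort(ref_cor[, 1], decreasing = TRUE)[1:10]  # the ten stocks closest to the reference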

My moment of joy was quickly overrun by the hunt for my next portfolio (I needed a minimum of two, remember?). At this point in the process, I was lost. Daily return and rolling 30-day variance proved to have very little correlation when compared to prices. Eventually, I had to face the fact that I could not escape the correlation matrix. I would have to tackle it head-first... or maybe I wouldn't. A logical next step was to look at the average correlations among the stocks. To do this, I created a full correlation matrix and summed each column. I subtracted 1 from each sum to exclude a stock's correlation with itself, then divided by N - 1 to find the average correlation that stock has with the other stocks in the index. This was not as foolproof as the first method, but it seemed to work well enough; over half of the securities had average correlations of 0.75 or higher. I took the ten stocks with the highest average correlations to be portfolio #2.
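In R, the average-correlation trick is nearly a one-liner once you have the matrix; a sketch using the same hypothetical prices frame as above:

C <- cor(prices)                             # full N x N correlation matrix
avg_cor <- (colSums(C) - 1) / (ncol(C) - 1)  # drop each stock's self-correlation of 1
sort(avg_cor, decreasing = TRUE)[1:10]       # portfolio #2: the ten highest averages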

It was near this time that tragedy struck. Somehow, Microsoft screwed me over. I don't know how, but they should know that I have a very particular set of skills, and when I find them... Long story short, my file got corrupted and I lost hours of work. I have a decent memory, so I was able to recreate everything that had been lost, and I was just thankful that I didn't lose everything. I will say that I had been saving frequently and that AutoRecover was absolutely zero help this time.

Once I had recovered from the setbacks, I set out to find a third approach to the problem. I stepped back and tried to simplify my thinking, and I'm glad I did. I asked myself, "If I were just handed a chart of security prices, how would I identify those that behaved similarly?" That question got my mind out of the world of data-crunching long enough to see the third part of the solution: draw a line! To be more specific, securities with similar trendlines are securities that in some sense "move together". So, I used simple regression to find the line of best fit for each security, and I took the group of 10 with the most similar slopes. Certainly not a slam dunk like the first method, but it does give some sort of answer to the question.
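A sketch of the trendline idea in R, again assuming the hypothetical prices frame. Note that rescaling each series to start at 1 is my own addition here; raw slopes aren't comparable across different price levels:

t <- seq_len(nrow(prices))
slopes <- sapply(prices, function(p) coef(lm(p / p[1] ~ t))[2])  # slope of each fitted trendline
sort(slopes)  # eyeball the sorted slopes and pick a run of ten that sit closest together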

To polish it off, I created a summary worksheet and also made charts for each index of 10 securities. As a teaser, here is the chart for index 1. It is pretty easy to see that the stocks make significant movements together.
Chart of Index 1

The contest isn't over yet, so congrats if you found this and ripped off my idea! Seriously though, I hope my ideas help somebody else tackle this problem in the future. I also hope that I win some money from this, especially after the technical frustrations it put me through (I'll update this post if I win). I'll probably end up doing some more MindSumo challenges once I get some extra time on my hands, and I will make sure to post about them as I complete them. If they are at all like this one, challenging but fun, I would recommend them to all students.


Tuesday, September 16, 2014

The Role of Market Participants in Equity Market Reform

Recently, the financial firm BlackRock released a report to the SEC offering the firm's views on necessary market reform. The 7-page report points out a number of important issues that, BlackRock argues, could be fixed without a major overhaul of the markets, increasing investor (and public) confidence along the way. Points of focus that I can really support include:
  • Reducing market structure and order type complexity
  • Modernizing data access (making the public exchange data feeds work as well as the proprietary ones)
  • Decreasing exchange access fees
  • Promoting greater transparency at every level of the market
I think this is a great move by BlackRock. They have a lot at stake here ($4.32 trillion in assets under management according to their website), so being proactive is a good play. Perhaps this will strike up a trend and we'll see other large financial companies do the same. 

Let's hope that these market reforms are enough to satisfy the general public, who right now think all of Wall Street acts something like this guy...
Wolf of Wall Street
We'll see. Anyways, I really like what BlackRock has done. They put forward some well-researched recommendations for reform that seem aimed mostly at the public good rather than at the benefit of the company.

Saturday, September 13, 2014

R Empirical Finance


I don't have much to say in this post, just that I wish I had found this sooner! There is a CRAN Task View called Empirical Finance that is an absolute goldmine of, you guessed it, R stuff related to empirical finance and econometrics. The page is maintained by Dirk Eddelbuettel, who is a quant researcher and developer. Dirk has contributed many great R packages and helps maintain even more, including Rcpp and RQuantLib. If you are looking for an R package that can help you with your data analysis or time series modeling or whatever crazy thing you do on your Saturday nights, make sure to check out the Empirical Finance page.

If you have no idea what the heck this R thing is, check out my Getting Started in R post.

I'm currently cooking up a few more detailed posts, which are essentially turning out to be Why Finance is Wrong: A Three-Part Series, so be sure to check back for those - should be interesting.

Wednesday, September 10, 2014

How to Download Historical Prices for Multiple Stocks

Downloading historical prices for multiple stocks has always been sort of a pain for me. At the university, I have access to a nice (read: expensive) software suite called DataStream that makes it decently easy to download data for hundreds of stocks. But I don't have 24/7 access to that computer, and my couch is a heck of a lot closer than that computer lab. So the problem remains: how do I download historical prices for more than one stock at a time?

Sure, you can look at my post about Freely Available Financial Data. Unfortunately, even using the trick about the Yahoo CSV URL will only allow you to download 200 at a time, and it is tough to do correctly.

I'll show you in this post how to download historical prices for every S&P 500 stock using two programs: R and Excel. First off, if you aren't familiar with R, see my post on Getting Started in R, which should get you ready to go for this post.

The Tickers

What you'll really need first is a list of the ticker symbols for every stock in the index. Luckily, there exists a Wikipedia page for everything these days - even a list of S&P 500 companies. Highlight with your mouse and copy the entire table of companies, then paste into Excel. It should look something like this.
Freshly copied table from Wikipedia page.
Next, clear all of the formats and hyperlinks. The easiest way to do this is to select everything (click the triangle in the upper left corner) then head over to the right side on the Home Tab where it says Clear. In the drop down menu, select Clear Hyperlinks and then Clear Formats. The data in the sheet should now be plain text. This is all on Excel 2013, so if you are using a different version the steps may be different for you. It is up to you, but I would delete everything except the first two columns. Save this file as a .csv type in a place you can easily access by going to File - Save As and selecting CSV (Comma Delimited). Leave Excel open, we'll come back to it.

Open up RStudio, and import the .csv file you just created by going to Tools - Import Dataset - From Text File. Give the imported data the name companies and deselect the Strings as factors option as shown below.
Importing list of companies.
Click Import and this data will be saved in an object named companies within your R Environment. From this, we want to extract the first column to get a vector made up exclusively of ticker symbols. The line below stores the column named Ticker.symbol from companies into a new vector called tickers:
tickers <- companies$Ticker.symbol

The Data

It is time to load up the workhorse of this process, the quantmod package. Select it under the Packages tab or type library(quantmod). This will give us access to all of the functions within the package. The quantmod package allows a user to download financial data by using back-end APIs. The first function of interest here is the getSymbols() function. Call the getSymbols() function and pass it the tickers object. Type: getSymbols(tickers)

Be patient here; this will take a few minutes depending on how many tickers you request and your connection speed. This function downloads the historical data from Yahoo Finance in the same format you'd get if you went to the website and downloaded it yourself, so it contains Date, Open, High, Low, Close, Volume, and Adjusted Close data. Notice that it says it is pausing for a second between requests; this is due to the restrictions Yahoo places on requests, which are limited to 200 tickers at a time. By pausing intermittently, the requests are broken into smaller chunks, and all of the tickers can be downloaded. This creates an xts object for every single ticker, named with the ticker symbol and containing the data for that stock.

While that is running, we can prepare the next few steps. Go back to your Excel sheet containing the list of companies. In the next column, we will prepare a concatenated string. First, some quick background on the quantmod functions. As stated previously, the getSymbols() command will return an xts object with certain data series in it. We can get at those specific data series using the functions below:
  • Op() - Open price 
  • Cl() - Close price 
  • Hi() - High price
  • Lo() - Low price
  • Vo() - Volume
  • Ad() - Adjusted close price 
So, if I just want the adjusted close price for CAT, I type Ad(CAT) and I get it. To store that in a vector, just use the <- syntax as is normal in R. 

But what if I only want a specific date range? That's possible too. Suppose I wanted July 1, 2014 to August 31, 2014; I would type Ad(CAT['2014-07-01::2014-08-31']). The dates must always be in yyyy-mm-dd format, so I must type 07 for July and not just 7. If I wanted July 1, 2014 through today, I would type Ad(CAT['2014-07-01::']), essentially just not restricting the end of the range.
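Here are those accessors collected into a short, runnable demo (CAT must be downloaded first for the later lines to work):

library(quantmod)
getSymbols("CAT")                                # creates an xts object named CAT
head(Ad(CAT))                                    # adjusted close, full history
adj_summer <- Ad(CAT['2014-07-01::2014-08-31'])  # fixed date range
adj_recent <- Ad(CAT['2014-07-01::'])            # July 1, 2014 through today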

We must now construct a column of commands like the ones above. For this example, suppose we want adjusted close prices from July 1, 2014 until today. In a blank column in Excel, type =CONCATENATE("Ad(", A2, "['2014-07-01::']),") which will build one such command, and copy the formula down to the end of the data set. Afterwards, copy the entire column and paste it as values to convert it to plain text. Don't forget to delete the pesky trailing comma from the very last item. Then copy the data in this column, not by clicking on the column header, but by highlighting the specific cells (don't forget your CTRL+SHIFT+DOWN shortcut here!).
Add the concatenated strings in Column C.
Hopefully by now quantmod has finished downloading the data for all of your stocks. If not, I'm sorry; perhaps this is all the justification you need to upgrade to a faster internet package. Anyways, when it is done, go back to the RStudio console and type stockprices <- cbind( and delete the closing parenthesis so that the command is unfinished. Hit Enter and you will get the + sign.
R is waiting for more input.
Now paste in the commands we copied from Excel, make sure there is no comma at the end, and hit Enter. You should still see the + sign indicating R is waiting for more input. Now add the closing parenthesis from before to complete the cbind() call. R will execute cbind() and bind all of those columns together by date into one object called stockprices. This may not finish immediately, so be patient with it. When the cbind() command is done, you can type View(stockprices) to see the result in the viewer if you'd like.

The final step is to save the table to a file, which can be done very easily from R with the write.csv() function. Unfortunately, there is one little hiccup: we can't use write.csv(stockprices, file="whatever.csv") because the dates will not be saved. Luckily for you, I know the workaround! The proper code is write.csv(as.data.frame(stockprices), file="yourfilenamehere.csv"), where yourfilenamehere.csv is whatever you want to name the file. Open the file in Excel to see the results if you want - good as gold!
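For the record, the whole Excel concatenation step can be skipped with a few more lines of R. A sketch, assuming every ticker downloaded successfully and kept its name as the object name (a symbol or two may need manual fixing):

adj_list <- lapply(mget(tickers), function(x) Ad(x['2014-07-01::']))  # adjusted closes per stock
stockprices <- do.call(cbind, adj_list)                               # bind all columns by date
write.csv(as.data.frame(stockprices), file = "sp500_adjclose.csv")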
Finished product.

Summary

Using this method, you can download historical data for any securities with data on Yahoo Finance or Google Finance. I've shown you how to download the historical adjusted close prices for every stock in the S&P 500 Index, probably saving you lots and lots of time. After the first time using this process, you'll have everything figured out and ready to go and will only have to wait for the data to download. If you found this useful, great! If you have any problems with the procedure, let me know and I'll try my best to help. Do you have an even better (also free) method? Feel free to share in the comments.

Saturday, August 30, 2014

Suggested Reads

I don't get to read as often as I'd like, but I do make time every now and then for a good finance-related book. Here is a list of books I've read, and I'll update it periodically to reflect any new works I've completed. I'd recommend every book on this list, so there's no need to worry about whether or not I liked it. If I ever feel super strongly one way or the other about a book, I may write a post discussing my thoughts on it.

Behavioral Finance

  • Irrational Exuberance
  • Beyond Greed and Fear
  • Ethics in Finance

Economics

  • Freakonomics
  • The Return of Depression Economics
  • The Housing Boom & Bust

Investing

  • The Only Three Questions That Count
  • A Random Walk Down Wall Street

Quantitative Finance

  • Mark Joshi's On Becoming a Quant (not a book, but definitely a suggested read)

Currently on the shelf (next in line)

  • Predictably Irrational
  • Reminiscences of a Stock Operator
  • Superfreakonomics
  • Liar's Poker
  • My Life As A Quant

Sunday, August 24, 2014

Getting started in R

Many of my posts related to programming will include discussion of the R statistical programming language. I thought I would put out a short-and-sweet guide to getting started in R from a mathematical finance perspective. This guide will get you off the bench and into the game! ...meaning I'll help you get things set up properly so you can begin some analysis.

Installation

The first thing you need to do is install R from here. Those download servers will always have the latest version available for your operating system. Speaking of which, I primarily run Windows unless I'm doing HPC on the supercomputer, so this guide will be heavily Windows-biased.

That will install the R language, core packages and libraries, and a simple GUI interface to use. I've found that RStudio is a much better GUI, so the next step will be to immediately install the RStudio IDE from here.

Packages

The beauty of R is that there are thousands of packages that can be easily installed. These packages are created by other R users to add functionality that the base install of R doesn't have, or to make certain tasks simpler. A number of core packages come with the R language itself, and huge numbers of others have been created that are useful for the type of analysis we're after.

To install a new package, open up RStudio and look towards the bottom right of the window. You'll see a window with tabs along the top (as shown in the screenshot below). As the red arrow indicates, click on the tab titled Packages. Then, as the green arrow shows, click the button labeled Install.


A window will pop up that lets you install packages from the online repositories or from a downloaded zip file (see below).


Leave everything at the default setting unless you have some reason to change it. List packages separated by commas as it says. The box even autocompletes sometimes, which is a nice feature for when you don't know the exact name of the package.

Here I'll break down the packages that I'd recommend if you were going to start doing some mathematical and/or statistical analysis on financial data.

Finance

quantmod is a great package that can download data straight from Yahoo Finance, Google Finance, and a few other sources. [See my post on freely available data sources.] Quantmod also contains a number of cool functions for analyzing time series.

RQuantLib is the R counterpart to the QuantLib project, which aims to provide a comprehensive library of useful quant functions, written in C++ and made available to other popular programming languages.

tseries is a package with lots of functions for creating and dealing with time series.

TTR (Technical Trading Rules) is a package that provides functions for building technical indicators and trading rules.

forecast adds the ability to forecast time series using a variety of methods.

To install these packages, copy and paste this line in the Packages text box in the window shown in the screenshot above: quantmod, RQuantLib, tseries, TTR, forecast
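Equivalently, if you'd rather skip the GUI, the same installation is one console command (and the same pattern works for the other package groups below):

install.packages(c("quantmod", "RQuantLib", "tseries", "TTR", "forecast"))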

There are many other packages available under the finance heading at http://www.rdocumentation.org/domains/Finance.

Bayesian Methods

LearnBayes is a package I've found to contain some helpful Bayesian functions.

MCMCpack will be very useful to empirical Bayesians.

bnlearn implements Bayesian networks.

To install these packages, copy and paste this line in the Packages text box in the window shown in the screenshot above: LearnBayes, MCMCpack, bnlearn

LaplacesDemon is a package full of Bayesian methods. It has been really useful in some of my recent work. This package is not available on CRAN (the online repository), so you will have to download the file from the website and install it separately. To do that, just change the dropdown from Repository to Package Archive File and navigate to where you downloaded the package.
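If you prefer the console for this too, install.packages() can handle a local archive directly; the path below is just a placeholder for wherever you saved the download:

install.packages("path/to/LaplacesDemon.tar.gz", repos = NULL, type = "source")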

As always, there are many more packages available on http://www.rdocumentation.org/domains/Bayesian, especially for users familiar with BUGS.

High-Performance Computing

foreach is a package that adds the common foreach method from many other programming languages into R, except it can handle parallel processing.

Rcpp integrates the R language with C++, so that C++ programs can call R functions and R programs can call C++ functions.

doParallel creates the clusters used in packages like foreach.

plyr brings many of the standard R functions into HPC territory by adding the ability to parallel process.

To install these packages, copy and paste this line in the Packages text box in the window shown in the screenshot above: foreach, Rcpp, doParallel, plyr

There are many other packages available under the high-performance computing heading at http://www.rdocumentation.org/domains/HighPerformanceComputing.

General


pso implements particle swarm optimization (it appears in the install line below).

xtable creates tables that are LaTeX-ready. I've yet to use this one, but I'm pretty excited it exists.

shiny is the package that allows you to create awesome R web apps. See this for examples. This is another package I haven't yet had the opportunity to try out, but I can't wait to get to work with it.

To install these packages, copy and paste this line in the Packages text box in the window shown in the screenshot above: pso, xtable, shiny

Tutorials

Now that you've got it all installed and ready, the only *minor* thing left is learning how to use it. There are a number of books available, as well as many online tutorials. The built-in help is really quite extensive, and that is usually what I consult first. As with many other programming languages, looking at code is the best way to learn, so that's what I did. Go online and find examples of what you are trying to do - there are bound to be some out there. Searching "r [package name] tutorial" should bring up many results for any of the packages I've given here because they are all widely used. Another great place to start is this StackExchange question on the very same topic.

I'll make a post from time to time detailing how to perform some type of analysis in R. Also, I'll try to include somewhat detailed instructions if I'm discussing some analysis I did in R, even if it isn't a "How To" type of post.

That's it for this guide! You should be all set to begin learning R and performing some mega-awesome data analysis.


From Financial Analyst to Data Scientist

With the recent article in the Wall Street Journal about data scientists being rock star high-tech workers, I've seen many people questioning whether they should have gone into Big Data rather than finance. I think the answer to this second-guessing is simple: do both. Having data analysis skills will get you just as far in finance as it will in tech. It could even be a back-up plan if finance doesn't work out.

The financial analyst position (or quantitative analyst, etc) is more and more becoming a data analyst position. It is up to the analyst to discern whatever can be learned from the data that is available and to make predictions and forecasts using those analyses. Financial analysts need to know how to use and interpret data.

Additionally, if finance doesn't pan out, the skills of data analysis are transferable to a large number of other industries - like tech, healthcare, and marketing, to name a few. Do some of these industries offer higher salaries than others? You bet. I'll remind you, though, that if you are picking one career path over another simply based on a small subset of highly paid individuals within that field..."you're gonna have a bad time."

"You're gonna have a bad time"


To sum it up, you're missing the point if you're asking this question. You should be a data scientist first and foremost. Where you choose to work, however, is up to you.

Friday, August 22, 2014

Freely Available Financial Data

Have you heard the phrase "data is cheap"? That may be true on average, but in finance, good free sources are hard to find. There are many paid options ranging from a few dollars per month to thousands of dollars per year. When you don't have the money to shell out for a DataStream subscription or a Bloomberg Terminal, it can be frustrating to find the data you need. I've compiled a list here of freely available sources of financial and economic data that I frequently use in my own work.

YAHOO FINANCE

Yahoo Finance is a great place to go for historical stock data. The best part about Yahoo Finance, in my opinion, is using the URL to download stock prices straight into a CSV file.
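As of this writing, the link has roughly the form below (AAPL is just an example ticker, and the exact parameters are subject to change):

http://ichart.finance.yahoo.com/table.csv?s=AAPL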

GOOGLE FINANCE

Google Finance is very similar to Yahoo Finance in regard to what data is available. I think Google does a good job of letting you compare a company's chart against its competitors though. Potentially a little easier to use and more appealing, but I'm not really concerned with petty things like that when I'm hunting for data.

FRED

The Federal Reserve Bank of St. Louis maintains a huge database of economic data called FRED (Federal Reserve Economic Data). This is my first choice when looking for anything financial or economic related that is on a macro level. The FRED Excel Add-in makes it super easy to download data series as well.

QUANDL

Quandl is something I've just recently stumbled upon. Its website currently states that it has over 10 million free data series available for download. The interface is easy to use. There are also a number of packages to make it easy to connect with statistical languages like R and MATLAB.
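For example, Quandl's R package turns a download into a single call; a quick sketch (the dataset code is just an example taken from their site):

library(Quandl)            # install.packages("Quandl") first
gdp <- Quandl("FRED/GDP")  # returns a data frame of US GDP observations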

FINRA

Looking for fixed income data has always frustrated me. FINRA is one place that has bond data available, which can be useful when examining corporate bonds. Unfortunately, there is no super simple way to get the trade history for a bond, because they don't provide download links.

FINVIZ

If I'm just looking at price charts, FINVIZ is my first choice. It has a great stock screener allowing you to screen by many different things including fundamentals, technicals, patterns, and trends.

These are my favorite free data sources, but of course there are many more out there. I consider these to be reliable sources, both in terms of availability and legitimacy. For many common projects, these sources should be enough to satisfy your needs.


Tuesday, August 19, 2014

Coding is one of the most important skills to have in finance

There are many other articles saying this in one way or another, a simple Google search will show that. Why am I pushing out another one?

Because it is needed.

I see many people, especially business students, who still think they don't need to learn to code (same thing with math, but I'll make that another post...or 50). Well, that's complete nonsense. My experiences alone prove otherwise.

Internship #1: As a young intern, I became the Excel VBA and automation guru of the main office of a publicly-traded insurance company full of actuaries and other people already proficient in Excel. I completed major projects that led to immediate results for the company. They would have kept me around as long as I wanted, but classes and extracurricular activities got in the way.

Internship #2: I became the Excel VBA and automation guru (...sound familiar?) of a division of one of the largest international manufacturing companies in the world. The Excel automation projects I completed saved the financial analysts several hours of manual work during monthly reporting. I also completely remade the product pricing model using Excel, which would directly impact profitability. Since leaving, this company has asked me multiple times to return in a couple of different roles.

Extra Project #1: I was part of a team working with a group of financial advisors who had little coding ability, but found themselves in need of a few automated tasks and other things that required programming knowledge to get done. Can't speak much about this one due to NDA, but knowing the R language was a big plus.

Extra Project #2: I scored a contracting/consulting gig as a freelance programmer with another financial advisor. Much like the above project, this advisor needed someone who understood finance, but could work with languages like Excel VBA and R.

Obviously, not everyone will have experiences like these... but they could if they knew how to code. Coding in finance is not just limited to high frequency trading, hedge funds, prop shops, and investment banks. I encountered it first-hand in insurance, manufacturing, and financial advising. Because I knew how to do it when others didn't, I had an advantage. Is it possible to get a job without this knowledge? Yes, but you had better have something else about you that stands out.

It may not be necessary to put in the time to become an expert programmer, but being at least somewhat well-versed in something like Excel VBA, R, or SPSS doesn't take much time and has many benefits. More complicated projects may require an object-oriented language like C++, C#, or Java. To deal with the core issues, randomness and uncertainty, knowing a programming language is key.

To make it easier and put my suggestions out there: if I were starting from scratch and wanted to learn one of these languages, I'd learn R. Why? Because it is free, open-source, used by millions of statistical professionals, and has thousands of add-on packages available. I use RStudio, which is a great and painless way to work with R. I've written a short-and-sweet guide to help you Get Started in R.
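For a tiny taste of what dealing with randomness looks like in R, here is a sketch that simulates a year of daily returns for a made-up stock (the normal distribution is a simplifying assumption, not an endorsement):

set.seed(42)
returns <- rnorm(252, mean = 0.0003, sd = 0.01)  # 252 trading days of random daily returns
prices <- 100 * cumprod(1 + returns)             # hypothetical stock starting at $100
plot(prices, type = "l", main = "One simulated year")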

The CORE

Why did I choose CORE? Simple. Cores are at the center. Cores are important. The finance CORE is no different. These are things that are at the center of the financial world.

CORE is:
  • Coding
  • Objectivity
  • Randomness
  • Estimation

To deal with that, financial professionals use:
  • C++/C#
  • Object-oriented Programming
  • R
  • Excel
Thus, that is the topic of this blog. These posts will discuss topics that are central to finance - at its core.

At a somewhat deeper level, I view much of what is discussed about finance, be it on the news or in the classroom or in the workplace, as mostly superficial. Just look at the headlines: "Dow 17,000 and why you should sell now!" and so on. These topics litter the landscape, and, truthfully, I find them quite boring. You won't see me spouting doom-and-gloom unless I've got some serious factual support to back it up. Make-believe milestones don't lie at the heart of finance. Uncertainty is at the core of finance. Randomness is at the core of finance. These are the topics that need to be discussed, and that is exactly what I plan to do.

Sunday, August 17, 2014

Hello World.

I'm excited to get this blog started! I've had this in mind for quite some time. I intend to post at least once a week on several topics within finance, such as

  • Statistics
  • Mathematics
  • Programming
  • Data Mining & Analysis
  • Econometrics
  • Econophysics
  • Stochastics
  • Optimal Control
and whatever else comes up. I'll also talk about current events as I see them related to this blog. 

Hope you will find it useful and enjoyable!