Excel autocorrelation


2024.05.28 19:17 Silly-Earth-9553 Please review my resume. Looking for internships and entry level jobs.

submitted by Silly-Earth-9553 to resumes [link] [comments]


2024.03.26 17:56 DirtyVader10 Implementing an autocorrelation formula


https://preview.redd.it/t5xs70a1lpqc1.png?width=232&format=png&auto=webp&s=15d99c18f80e658f266ba3a55370f6ed5b88a348
The formula above is the lag-1 autocorrelation of r_t, where r_t is the return of an asset at time t.
I have a column of returns for an asset and need to implement this formula somehow, but I have essentially never used Excel before and I can't find anything on this by googling.
How would you use this formula in Excel?
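One common route in Excel is CORREL over two copies of the return column offset by one row (for example, =CORREL(B3:B463, B2:B462) if the returns occupy B2:B463). Assuming the screenshot shows the standard textbook lag-1 autocorrelation (products of deviations from the overall mean, divided by the sum of squared deviations), the two estimates agree closely for any reasonably long series. A minimal Python sketch of both, with made-up returns standing in for the real column:

import pandas as pd

# Made-up returns; in practice, load the real return column instead
returns = pd.Series([0.012, -0.004, 0.007, 0.001, -0.009, 0.015, -0.002, 0.006])

# CORREL-style estimate: correlation of r_t with r_{t-1} over the overlapping rows
print(returns.autocorr(lag=1))

# Textbook estimator: sum of (r_t - rbar)(r_{t-1} - rbar) divided by sum of (r_t - rbar)^2
dev = returns - returns.mean()
print((dev.iloc[1:].values * dev.iloc[:-1].values).sum() / (dev ** 2).sum())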
submitted by DirtyVader10 to excel [link] [comments]


2024.03.20 18:24 Acceptable_Storm_894 Point Pattern Analysis

Hello one and all, a question here about analyzing some point data. I'm working within ArcGIS Pro with a professional license, so most of Esri's tools are available for me to use.
I have a dataset containing (i) the names of certain businesses, (ii) the coordinate locations of those businesses, and (iii) a label for what kind of business it is (e.g., Food, Retail, etc.). The dataset is not one I've created, and my job isn't to change/reassign the label for the business. There are 10 possible labels.
In the original dataset, some businesses contain more than one label. E.g., Business A might be both Food and Retail.
My tasks are to:
(1) Identify whether businesses of the same label are clustered, dispersed, or randomly distributed across the study area (state of Wisconsin). In my situation, ideally they will be dispersed, allowing for greater accessibility across the study area.
(2) Identify whether businesses of different labels are clustered, dispersed, or randomly distributed across the study area. In my situation, ideally businesses of different labels are clustered, allowing for greater variety where lots of business are present (such as in a city, where clusters of businesses are more likely).
To prepare the data, I have:
(A) Parsed out the field of labels into several fields, since original values contained lists. Now, there is one field for each label, where businesses assigned that label have a value of 1 and businesses not assigned that label have a value of 0.
E.g., original data (in Excel):
Business Label
A Food
B Food
C Food, Retail
D Retail
E.g., parsed data (now in ArcGIS):
Business Food Retail ...
A 1 0 ...
B 1 0 ...
C 1 1 ...
D 0 1 ...
My thinking is:
(1) Spatial Autocorrelation (Moran's I) is used to evaluate clustering, dispersion, and randomness when both feature locations and feature values matter. Is there a way I can evaluate the data based on location alone? Is using Average Nearest Neighbor a step in the right direction?
(2) I am really struggling to conceptualize the appropriate way to do this, as far as making the labels matter. There is hypothetically a way to evaluate categorical data like this, no? Could I assign a number to each label and, say, run Spatial Autocorrelation? In that case I can account for the value, but how do I account for the fact that some businesses have multiple labels?
Any suggestions are well-appreciated, thanks.
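For the location-only part of (1), a common quick check is the Clark-Evans average nearest neighbour ratio: the observed mean nearest-neighbour distance divided by the value expected under complete spatial randomness (roughly 1 for random points, below 1 for clustered, above 1 for dispersed). A minimal sketch outside ArcGIS, assuming projected coordinates (e.g. metres), a known study-area size, and scipy available:

import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_ratio(coords, study_area):
    """Observed mean nearest-neighbour distance / expected distance under CSR."""
    tree = cKDTree(coords)
    # k=2 because each point's nearest result at k=1 is the point itself
    dists, _ = tree.query(coords, k=2)
    observed = dists[:, 1].mean()
    expected = 0.5 / np.sqrt(len(coords) / study_area)
    return observed / expected

# Made-up projected coordinates for businesses sharing one label
coords = np.array([[100.0, 200.0], [150.0, 230.0], [900.0, 880.0], [910.0, 870.0]])
print(nearest_neighbor_ratio(coords, study_area=1_000_000.0))

ArcGIS Pro's Average Nearest Neighbor tool reports essentially this ratio along with a z-score and p-value, so running it once per label (selecting only the features flagged 1 for that label) should address the location-only version of (1).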
submitted by Acceptable_Storm_894 to askgis [link] [comments]


2024.02.23 06:21 bosebosebosebosebos I've applied to over 500 data analysis/ data science related internships and gotten about 7 interviews. Soon to be a senior

submitted by bosebosebosebosebos to resumes [link] [comments]


2023.07.16 18:40 moeajaj Ways to enhance my volatility model code

I am fairly new to coding, and I am using Python to analyze data for my thesis. It is related to volatility, and I wonder if there is a way to enhance my code. If there is a suggestion to use a different model, I will be more than happy to listen to the reasons. Thanks in advance.

import pandas as pd
from arch import arch_model
from statsmodels.stats.diagnostic import acorr_ljungbox
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Open the Excel workbook (path redacted)
xls = pd.ExcelFile('/xxxxx/xxxxxx/xxxxxx/xxxxxxxx/xxxxxxx .xls')

# One sheet per ticker
sheets = xls.sheet_names

for sheet in sheets:
    # Data from the current sheet
    data = pd.read_excel(xls, sheet_name=sheet)

    # Use the date column as the index
    if 'Date' in data.columns:
        data.set_index('Date', inplace=True)
    elif 'Trade Date' in data.columns:
        data.set_index('Trade Date', inplace=True)

    # Pick the price column
    if 'Close' in data.columns:
        prices = data['Close']
    elif 'Price' in data.columns:
        prices = data['Price']

    # Calculate returns and rescale
    returns = prices.pct_change().dropna() * 1000

    # Specify the GARCH-M model
    model_garchm = arch_model(returns, vol='Garch', p=1, q=1, mean='ARX', lags=1)

    # Fit the GARCH-M model
    res_garchm = model_garchm.fit(disp='off')

    # Print the sheet name and model summary
    print(f'Ticker: {sheet} - GARCH-M Model')
    print(res_garchm.summary())

    # Specify the APARCH model
    model_aparch = arch_model(returns, vol='APARCH', p=1, o=1, q=1)

    # Fit the APARCH model
    res_aparch = model_aparch.fit(disp='off')

    # Print the APARCH model summary
    print(f'Ticker: {sheet} - APARCH Model')
    print(res_aparch.summary())

    # Specify the GARCH model
    model_garch = arch_model(returns, vol='Garch', p=1, q=1)

    # Fit the GARCH model
    res_garch = model_garch.fit(disp='off')

    # Print the GARCH model summary
    print(f'Ticker: {sheet} - GARCH Model')
    print(res_garch.summary())

    # Plot GARCH-M model fit
    fig_garchm = res_garchm.plot()
    plt.title(f'{sheet} - GARCH-M Model Fit')
    plt.show()

    # Plot APARCH model fit
    fig_aparch = res_aparch.plot()
    plt.title(f'{sheet} - APARCH Model Fit')
    plt.show()

    # Plot GARCH model fit
    fig_garch = res_garch.plot()
    plt.title(f'{sheet} - GARCH Model Fit')
    plt.show()

    # Analysis of residuals for the GARCH-M model
    residuals_garchm = res_garchm.resid

    # 1. Plot the residuals for the GARCH-M model
    plt.plot(residuals_garchm)
    plt.title(f'{sheet} - GARCH-M Residuals')
    plt.show()

    # Analysis of residuals for the APARCH model
    residuals_aparch = res_aparch.resid

    # 1. Plot the residuals for the APARCH model
    plt.plot(residuals_aparch)
    plt.title(f'{sheet} - APARCH Residuals')
    plt.show()

    # Analysis of residuals for the GARCH model
    residuals_garch = res_garch.resid

    # 1. Plot the residuals for the GARCH model
    plt.plot(residuals_garch)
    plt.title(f'{sheet} - GARCH Residuals')
    plt.show()

    # 2. Test for autocorrelation using the Ljung-Box test for the GARCH-M model
    ljung_box_garchm = acorr_ljungbox(residuals_garchm, lags=[10])
    print(f"Ljung-Box test (GARCH-M): {ljung_box_garchm}")

    # 2. Test for autocorrelation using the Ljung-Box test for the APARCH model
    ljung_box_aparch = acorr_ljungbox(residuals_aparch, lags=[10])
    print(f"Ljung-Box test (APARCH): {ljung_box_aparch}")

    # 2. Test for autocorrelation using the Ljung-Box test for the GARCH model
    ljung_box_garch = acorr_ljungbox(residuals_garch, lags=[10])
    print(f"Ljung-Box test (GARCH): {ljung_box_garch}")

    # 3. Test for normality with a Q-Q plot for the GARCH-M model
    sm.qqplot(residuals_garchm, line='s')
    plt.title(f'{sheet} - GARCH-M Q-Q plot')
    plt.show()

    # 3. Test for normality with a Q-Q plot for the APARCH model
    sm.qqplot(residuals_aparch, line='s')
    plt.title(f'{sheet} - APARCH Q-Q plot')
    plt.show()

    # 3. Test for normality with a Q-Q plot for the GARCH model
    sm.qqplot(residuals_garch, line='s')
    plt.title(f'{sheet} - GARCH Q-Q plot')
    plt.show()
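On the question of enhancements: one refinement often suggested for this kind of diagnostic block is to run the Ljung-Box test and Q-Q plot on standardized residuals rather than raw residuals, since a fitted GARCH-type model implies time-varying variance. A sketch of what that could look like for the plain GARCH fit, appended inside the loop (same indentation as the code above):

    # Standardized residuals: raw residuals divided by the fitted conditional volatility
    std_resid_garch = res_garch.std_resid.dropna()

    # Ljung-Box test and Q-Q plot on the standardized residuals
    print(f"Ljung-Box test (GARCH, standardized): {acorr_ljungbox(std_resid_garch, lags=[10])}")
    sm.qqplot(std_resid_garch, line='s')
    plt.title(f'{sheet} - GARCH standardized residuals Q-Q plot')
    plt.show()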

submitted by moeajaj to pythontrading [link] [comments]


2023.04.30 03:00 Bishops_Guest fMRI was briefly mentioned in “trouble with sugar” as study for sugar triggers the same parts of the brain as cocaine. Reminded me of my all time favorite scientific debunking.

fMRI was briefly mentioned in “trouble with sugar” as a study showing sugar triggers the same parts of the brain as cocaine. Reminded me of my all-time favorite scientific debunking. submitted by Bishops_Guest to MaintenancePhase [link] [comments]


2023.01.24 23:36 Relative-Top-7006 Graduate student (Data Science) graduating in May 2023, asking for advice after three years of no employment success after Undergrad.

The situation:
I am a Data Science (Applied Statistics and Analytics) graduate student graduating this semester and have been searching for full-time employment for the past three years. My concentration is Classification, but I am very well-versed in the mathematics behind data-driven Methods. My undergrad was in Chemistry. I will be 22 by the time my Data Science program finishes.
My approach:
Throughout the past 3 years, I've taken my resume to 2 different career advising centers. The consensus from the staff is that it's short, gets the point across, and is very polished. I make sure to write a fully tailored Cover Letter and augment my resume to fit each position I apply to.
Just for good measure, I reach out to the hiring manager on LinkedIn a day after applying to reinforce my application.
Right now, my program has a student list where we get direct emails from employers specifically asking for graduate students. I personally reach out to these employers, write a custom letter, search up company beliefs and values, and I still do not hear back.
I also have a YouTube playlist consisting of video presentations demonstrating the various techniques I learned throughout the program.
I've completed projects for IHG and government work with the DOJ (I can't go into details about these because of confidentiality).
My father offered me additional instruction outside of class hours to help bolster my resume. He's a Mathematics professor who taught me skills from courses I never registered for, such as Neural Network engineering (learned in advanced data mining) and the proofs behind common techniques such as Principal Component Analysis, Time Series autocorrelation, and how to derive the equations for the mixed effects model.
My mother also made sure to share an Excel spreadsheet for me so we can keep track of the positions I've applied to. If she has free time, she occasionally updates it for me with positions pertaining to actuarial mathematics and classification methods.
Question:
My question is, what else should I be doing? I graduate in May 2023 and need a career I can use to pay off student loans. I'm not working part-time right now because I have to commute an hour to school and an hour back home (two hours out of my day, three days a week) and use most of my other time either applying to jobs or working with my father to scrape together whatever project we can use to put on my resume.
submitted by Relative-Top-7006 to jobs [link] [comments]


2022.07.29 02:54 Fantastic-Flower-575 increment and decrement cell reference range

This is for autocorrelation; a quick question about cell references.

=CORREL($B3:$B$463, $B$2:$B462)
=CORREL($B4:$B$463, $B$2:$B461)
=CORREL($B5:$B$463, $B$2:$B460)

On the left I have B3, B4, B5 incrementing, and on the right I have 462, 461, 460 decrementing. They're lag factors, but that's not important to the question.
Excel won't or can't pick up the pattern when I fill down. I need 27 more rows and was looking to see if there is a better way than going through each formula and editing the number by hand.
Thanks
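Within Excel, one common workaround is to build the shrinking range with INDEX or OFFSET driven by ROW(), so the formula fills down without hand edits (the exact formula depends on the sheet layout). If a script is an option, the whole lag series can also be generated at once; a minimal pandas sketch, with simulated values standing in for the real B2:B463 column:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
series = pd.Series(rng.normal(size=462))  # placeholder for the real values in B2:B463

# Lag-k correlation, equivalent to =CORREL(B(2+k):B463, B2:B(463-k))
for k in range(1, 31):
    print(f"lag {k:2d}: r = {series.autocorr(lag=k):.4f}")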
submitted by Fantastic-Flower-575 to excel [link] [comments]


2022.03.27 20:59 DismalFan1 graduating soon, how realistic is it for me to get a job as a junior quant trader?

I will graduate soon with a bachelor's degree in banking and finance, with a specialization in risk management.
I will start to apply, and I would like to have your opinion on whether it is realistic for me to do so.
To give context, these are the main classes I had:
And here is what I studied by myself:
-Time series analysis (stochastic processes, stationarity, autocorrelation, time series transformations, cointegration, a little bit of forecasting)
And finally, this is what I did as a quant project:
I also traded with a small account for the past two years, didn't make a lot of money, but my account is pretty small, so it's not surprising.
Do you think I have a shot ?
I'm sorry if the question sounds weird, but the quant industry is really small compared to traditional finance in my country, so I have almost no information about it (I'm Swiss).
submitted by DismalFan1 to quant [link] [comments]


2022.01.30 01:36 braaaaiiinnnsss STATISTICS ROCKS! Updated post from SuperstonksQuants on stock relationships

STATISTICS ROCKS! Updated post from SuperstonksQuants on stock relationships
Hello from SuperstonksQuants (HomeDepotHank69’s quant group)! For those of you who were not around, u/HomeDepotHank69 posted a series of excellent quant posts a while ago, and formed a group of quant-oriented apes to tackle some difficult questions. His account is now deleted, but we’re trying to keep his spirit alive. Some of us (including myself) have not been active for a few months for a variety of reasons, but there are still a lot of wrinkles there working. I found out recently that my original post on statistics was removed, so I’d like to post an updated version here.
But first, I want to quickly remind everyone to keep up your privacy! Several of my fellow quants got ‘attacked’ on social media after they posted some personal information. As far as I know, nothing too serious has happened, but as we get closer to MOASS they might start using dirtier tactics.
Tldr: Standard correlations (Pearson, Spearman, etc.) are misleading when trying to compare stocks. We found a fairly simple way to fix the issue by transforming the data using differencing first, then running cross-correlations or rolling correlations on the transformed data. A fellow quant ape, u/orangecatmasterrace, created a shiny app so everyone can compare tickers without needing any coding experience: https://orangecatmasterrace.shinyapps.io/stonk_app/

This is a longgg post, so here is a table of contents of sorts:
1. Introduction to statistics
2. Standard correlations
3. Major issue with standard correlations when looking at stocks
4. Removing autocorrelation from the data
5. Proof of concept
6. Rolling correlations (not included in my original post)
7. Tool so you can do it without any coding!

This is not only my work, but that of many other SuperstonksQuants members (particularly u/orangecatmasterrace, u/xpurpleamyx, and several more who requested to remain anonymous). My qualifications - I teach statistics at the university level.
Now, on to STATISTICS!
Introduction to statistics
One major strength of apes is our determination to find the truth from actual data, not relying on fb posts from our uncles. Statistics is just a method for working with data and answering the question: how confident are you in the effect? This is a big deal when you are making claims about worldwide illegal activity, so we want to make the tools available to all apes.
I am on the low-level analysis side, and wanted to bring you all up to speed on how to answer questions like: Is X related to Y? This is one of the most common types of questions we receive, so that is what I have been focused on. It turns out, this is more complicated to answer than you might think. If you google how to see if two stocks are related, even Yahoo Finance will tell you to run a correlation. Standard correlations are misleading due to the type of data we are working with.
That being said, I will start with the most common type of correlation, the Pearson’s correlation coefficient, so we can talk about stats in general and then move on to my recommendations on what to use.
Standard correlations
A correlation is used when you have two continuous variables (like two stocks), and you want to see if they are related. The standard correlation combines two things: how well the two variables move together (covariance) and how spread out the data is (variance). THIS IS AMAZING, correlations are seriously clever. However, every statistical analysis comes with assumptions (the fine print) that even peer reviewed publications sometimes miss.
The most common correlation analysis is a Pearson’s correlation coefficient. When you run a Pearson’s correlation, you get two numbers labeled r and p. The r is a value between -1 and 1, the absolute value of which reflects the strength of the relationship, and the sign indicates the direction (positive – as one variable increases, the other increases; negative – as one variable increases, the other decreases). Here are some visual examples (the x and y axis are your two variables, such as GME and XRT).

Source: https://statistics.laerd.com/
As we can see, when two variables move together, their r is large (left graphs) and if two variables are not related, then r is close to 0 (right graphs).
From your r and the size of the dataset, you can get a p-value. This is a value you get from most statistical analyses (not just correlations) that answers that question I mentioned above, how confident are you? The p-value is a number that ranges between 0 and 1. The lower the value, the more confident you can be that the effect exists. For correlations, the lower the value the more sure we are that the two stocks (or whatever) are related.
In more detail, the p-value is the probability that you found the effect due to chance (sort of). That is, there is always some probability that two stocks will vary together just due to random chance, rather than because of some underlying market manipulation. The p-value helps us find that probability.
As your r gets farther from 0, the smaller your p is likely to be. In the graph above, you would find the lowest p-value for the two graphs on the left, slightly larger for the two middle graphs, and a p-value close to 1 for the two graphs on the right.
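As a small worked illustration of r and p on simulated data (scipy assumed):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
x = rng.normal(size=200)                               # made-up "variable 1"
y_related = 0.8 * x + rng.normal(scale=0.5, size=200)  # moves with x
y_unrelated = rng.normal(size=200)                     # independent of x

r1, p1 = pearsonr(x, y_related)
r2, p2 = pearsonr(x, y_unrelated)
print(f"related:   r = {r1:.2f}, p = {p1:.2g}")   # large r, tiny p
print(f"unrelated: r = {r2:.2f}, p = {p2:.2g}")   # r near 0, larger p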
Major issue with standard correlations when looking at stocks
Let's try some real data. If I look at the Pearson correlation between GME and the popcorn stock, I find r = 0.71, p = 2.2e-16. That's a p-value with more zeros than I even thought possible! Now let's look at something that should not be correlated, GME and SPY (r = 0.78, p = 2.2e-16). They are basically the same. They are so similar I had to continually check my code. This is certainly not true for all stocks, I just found it to be a good example. [Note: these values are from my original post and may have changed slightly since then]
Why might this be?
All statistical analyses come with fine print, or assumptions, that must be met for the analysis to work. At the bottom of this linked page, there is a nice list of the assumptions for Pearson correlations, along with explanations: https://statistics.laerd.com/statistical-guides/pearson-correlation-coefficient-statistical-guide.php
The biggest issue with stock data is it breaks the independence assumption (assumption 3). The independence assumption basically says that each observation within a variable should be independent of other observations in the same variable.
If we return to the correlation figure above, typically each ‘dot’ or each data point comes from one ‘participant’. For example, say I have 15 apes and an all-inclusive banana buffet. I measure the size of each ape and how many bananas they eat. I’d likely see something similar to the upper left graph (the larger the ape, the more bananas they eat). A crucial aspect of data like this, is that the number of bananas eaten by one ape does not depend on the number of bananas another ape has eaten. That is, one dot or one data point does not depend on another.
In stock data (or crypto), this is not true. The opening price on one day depends on the price on a previous day. This type of data is called time series data, which just means it changes over time. One way statisticians talk about this is called autocorrelation, which is simply that data points in a variable are correlated with others in that variable (it is correlated with itself).
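To see the autocorrelation issue concretely: a simulated random walk (a toy stand-in for a price series) has lag-1 autocorrelation close to 1, while its day-to-day changes do not. A minimal sketch:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(size=500).cumsum())  # simulated "price" series

print(f"lag-1 autocorrelation of prices:  {prices.autocorr(lag=1):.3f}")          # near 1
print(f"lag-1 autocorrelation of changes: {prices.diff().autocorr(lag=1):.3f}")   # near 0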
[DISCLAIMER: I am not an expert in time series data. I have been doing a lot of research lately to try to catch up. If anyone has a better method than the one below, please let me know!]
Removing autocorrelation from the data
There are a few ways to deal with this, but we ultimately decided to remove the autocorrelation from the data and do our statistics on the result. The methods used are not developed by us. They were found from an excellent graduate student statistics class at Penn State (https://online.stat.psu.edu/stat510/lesson/1) and from this online text book by Rob J Hyndman and George Athanasopoulos (https://otexts.com/fpp2/index.html).
We tried A LOT of different methods (filtering data through ARIMA models, transforming data using various functions, etc). The resulting data from many of the methods could only be used specifically for correlations, and there are a lot of other tests we want to do. We decided on a simple transformation that does a great job (though not perfect), called differencing. The result will sometimes have small autocorrelation left in the data, but it is simple, clean, and hopefully I will convince you that it works for our purposes in the next section.
Differencing is typically used to stabilize the mean of a time series, so you can focus on fluctuations (what we are interested in). It is a transform where you subtract the previous observation from the current observation for all data. So difference(t) = observation(t) – observation(t-1). There is a lot more to discuss about this, and I would be happy to answer questions, but here I want to move on to the evidence that this method works (this post is long enough as is).
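A minimal sketch of the transform and why it matters, on two simulated, unrelated series that both drift upward (the kind of shared trend that inflates a raw correlation):

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
drift = np.arange(500)
stock_a = pd.Series(0.10 * drift + rng.normal(size=500).cumsum())  # unrelated random walks
stock_b = pd.Series(0.05 * drift + rng.normal(size=500).cumsum())  # with an upward drift

print(f"correlation of raw series:         {stock_a.corr(stock_b):.2f}")  # inflated by the trend

# difference(t) = observation(t) - observation(t-1)
diff_a, diff_b = stock_a.diff().dropna(), stock_b.diff().dropna()
print(f"correlation of differenced series: {diff_a.corr(diff_b):.2f}")    # close to 0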
Proof of concept
Before showing the proof of concept, I want to go over one additional issue with stock data, that the relationship might be delayed in time. For example, if I thought Put OI was related to changes in GME price, that relationship might be delayed a day or two (e.g. high put OI on day 1 results in a change in GME price on day 3). In statistics we call this lag. To check for lag, I use cross-correlation, which is basically running correlations on the two variables while adjusting the lag. The graphs I’m about to talk about all use cross correlation.
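A minimal sketch of the cross-correlation step on differenced (here: directly simulated) data, where the second series follows the first with a 3-day delay:

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
x = pd.Series(rng.normal(size=300))                 # e.g. daily changes in put OI (simulated)
y = x.shift(3) + rng.normal(scale=0.5, size=300)    # reacts to x three days later, plus noise

# Correlate y with x shifted by each candidate lag; the peak should land at lag = 3
ccf = {lag: y.corr(x.shift(lag)) for lag in range(-10, 11)}
best = max(ccf, key=lambda lag: abs(ccf[lag]))
print(f"strongest relationship at lag {best}: r = {ccf[best]:.2f}")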
Now to proof of concept. If you’ve stuck with me so far, congrats! I know this is a lot.
Let’s look at some graphs. On all graphs below, the left panel shows price data, and the right shows the cross-correlation results. The results are shown as a bar graph with lag on the x-axis (days in this case) and Pearson’s correlation coefficient (r) on the y-axis. If a bar crosses the horizontal blue line, it is significant at the p = 0.05 level.
To help with interpreting the data, I messed around with the mini squeeze from GME in January (happy anniversary!). The following is correlating that data with itself with different lags/transformations, which means that the r = 1 in every result.
https://preview.redd.it/0l40iqniwpe81.jpg?width=1280&format=pjpg&auto=webp&s=036f994ac014c05a61b39301f2674881fc115fdc
Let's take this one at a time. In the first set, No lag (pos), the two time series are perfectly matched, so we see a large positive r at lag = 0. Likewise for the No lag (neg) set, but now the time series are exactly opposite from one another, so we see a large negative r at lag = 0. The bottom three sets adjust the lag between the two time series. As we adjust it, we see the largest r at different lags (seen as the large bar at lag = 10 or -10). For all of these, lag indicates the number of days. That is, a high r at lag = 1 means the two stocks are related to each other offset by one day.
Coooool cool cool so we know the method (difference the data, then run a cross correlation) captures lag with idealized data, now let’s look at how GME relates to the popcorn stock and SPY like we did with standard correlation to see if all this work is actually worth it. As a reminder, a standard Pearson’s correlation found an extremely strong relationship between GME and popcorn stock (as expected) as well as between GME and SPY (not expected).
https://preview.redd.it/7tvpbtaexpe81.jpg?width=1280&format=pjpg&auto=webp&s=311b17b2a49f709c26e524de08cf393c4d529743
This is what I was hoping for! GME and popcorn stock show a strong correlation at lag 0, yet GME and SPY show very little correlation across the time period. I tested a bunch of other stocks (and simulated data) that shouldn’t be related, so now I feel confident that the method works (if you want more information, please let me know). Here are some additional comparisons I made in my original post just as examples (have not been updated):

Question 1 from homedepothank69
Rolling correlations (not included in my original post)
So far the technique works pretty well to answer the question, are X and Y related? Another question you might want to ask is, at what point in time did X and Y become related? The hedgies use a crazy amount of BS to manipulate stocks, and they seem to change their strategy fairly regularly, so it's good to know at what time they use one technique or another. This is where rolling correlations come in. Basically, you run a correlation for data between days 1-20, then you run it for data between days 2-21, and so on. In this way, we can find roughly when the two became related.
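A minimal sketch of a rolling correlation on differenced (here: directly simulated) data with a 20-day window; the second series only starts tracking the first halfway through:

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
a = pd.Series(rng.normal(size=250))
b_first = rng.normal(size=125)                                        # unrelated to a at first
b_second = a.iloc[125:].to_numpy() + rng.normal(scale=0.3, size=125)  # then tracks a closely
b = pd.Series(np.concatenate([b_first, b_second]))

rolling_r = a.rolling(window=20).corr(b)   # correlation over each 20-day window
print(rolling_r.iloc[[60, 240]])           # low early on, high once the relationship appears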
Tool so you can do it without any coding!
A fellow quant ape, u/orangecatmasterrace, is a wizard at shiny apps and converted our code into an easy to use browser app. Here is that link, as well as his original post on correlations:
https://orangecatmasterrace.shinyapps.io/stonk_app/
https://www.reddit.com/Superstonk/comments/o1a73z/hi_apes_we_need_to_about_a_little_thing_called/
All data is scraped from yahoo finance when you enter the request, so the data is always up to date. As discussed above, the data is differenced first before running the analysis. They used the code for a rolling correlation, so you can enter any ticker over any time period and find out when they became correlated (if ever). Here is just one example, GME vs. popcorn stock:

https://preview.redd.it/gyvpyviiype81.jpg?width=1889&format=pjpg&auto=webp&s=02d03694c4686d678dbce014020ebde313e8d303
On the left you can enter any tickers you want to compare, as well as the date, along with other preferences. On the right are the results. The top shows the price of the two tickers, and the bottom shows the r value across different dates. So GME and popcorn were not related at all until the mini squeeze in January. After that, they have remained correlated for the whole year.
I won’t discuss any speculation on what all of this means, rather the purpose of this post is to make sure you have the tools necessary to answer the questions you want. I hope all you lovely apes will share any interesting relationships you find!
submitted by braaaaiiinnnsss to Superstonk [link] [comments]


2021.11.10 01:52 Locus_Delicti Basketball Library Links Updated

The last few months I've received a lot of requests for documents I shared many years ago - apparently Google Drive's sharing policies have changed and people have been unable to access them. I've reuploaded the documents to scribd and you can find them below:
Playbooks/Play Analysis
The Complete Book on Basketball's Flex Offense
The Triangle Offense by Tex Winter
NBA Coaches Playbook
Stanford Triangle Offense
Tex Winter's Triangle Post Offense
The Complete Book of Offensive Basketball Drills
The Flex Offense
Basketball Coaching Toolbox
NBA Playbook
Dribble Motion Offense
12 Quick Hitters for the Flex Offense
Oklahoma City Thunder Playbook
2011 NBA Miami Heat Playbook
NCAA Division 1 Playbook
Dallas Mavericks 2011 Playbook
Boston Celtics Playbook
Dribble Drive Motion Offense
Basketball Drills and Practice Plans
Dribble Drive Motion Offense Breakdown Drills
Basketball Fundamentals - Footwork
Breakthrough Basketball Drills
Comprehensive Guide to the Flex Offense
Fresno Dribble-Drive Motion Offense
Special Defense: 1-1-3 Match-Up Zone
Russia's Offensive System
Brazil Offensive System: Season 2008/2009
Basketball Playbooks: Team Offense
The Argentina Offense
Academic Papers
Scoring and Shooting Abilities of NBA Players
CourtVision: New Visual and Spatial Analytics for the NBA
Stratified Odds Ratios for Evaluating NBA Players Based on their Plus/Minus Statistics
Parity and Predictability of Competitions
Simpson’s Paradox and Other Reversals in Basketball: Examples from 2011 NBA Playoffs
The Effect of Early Entry to the NBA
The Price of Anarchy in Basketball
Transitioning to the NBA: Advocating on Behalf of Student-Athletes for NBA & NCAA Rule Changes
An application to spatial statistics to basketball analysis; The case of Los Angeles Lakers from 2007 to 2009.
Racial Bias in the NBA: Implications in Betting Markets
The Legality Of An Age-Requirement In The National Basketball League After The Second Circuit's Decision In Clarett v. NFL
The NBA and the Great Recession: Implications for the Upcoming Collective Bargaining Agreement Renegotiation
Modeling Basketball’s Points per Possession With Application to Predicting the Outcome of College Basketball Games
Choking vs. Clutch Performance: A Study of Sport Performance Under Pressure
Predicting the outcome of NBA playoffs using the Naïve Bayes Algorithms
Performance under Pressure in the NBA
Predicting NBA Games Using Neural Networks
Evaluating Individual Player Contributions in Basketball
A Simple and Flexible Rating Method for Predicting Success in the NCAA Basketball Tournament: Updated Results from 2007
Experts’ Perceptions of Autocorrelation: The Hot Hand Fallacy Among Professional Basketball Players
Offense-Defense Approach to Ranking Team Sports
The Role of Rest in the NBA Home-Court Advantage
A New Approach to Decision Making in Basketball - BBFBR Program
Best 'sweet spots' on the backboard
Hormonal Analysis In Elite Basketball During A Season
"He Got Game" Theory? Optimal Decision Making and the NBA
Basketball game-related statistics that discriminate between teams' season-long success
Evaluating Basketball Player Performance via Statistical Network Modeling
Measurement error and the hot hand
Do genes determine champions?
Allocative and Dynamic Efficiency in NBA Decision Making
Choking and Excelling at the Free Throw Line
Ups and Downs: Team Performance in Best-of-Seven Playoff Series
Decertification: The NFLPA and NBPA's Nuclear Option
Optimal End-Game Strategy in Basketball
Collectively Bargained Age/Education Requirements: A Source of Antitrust Risk for Sports Club-Owners or Labor Risk for Players Unions?
Effort-vs-Concentration-The-Asymmetric-Impact-of-Pressure-on-NBA-Performance
A stakeholder assessment of basketball player evaluation metrics
Whole season variation of free testosterone / cortisol ratio in elite basketball players
Labor Relations in the NBA: The Analysis of Labor Conflicts Between Owners, Players, and Management from 1998-2006
Does the NBA Still Have "Market Power?" Exploring the Antitrust Implications of an Increasingly Global Market for Men’s Basketball Player Labor
National Basketball Association. v. Williams
FIBA Assist Magazine
Issue 1
Issue 2
Issue 3
Issue 4
Issue 5
Issue 6
Issue 7
Issue 8
Issue 9
Issue 10
Issue 11
Issue 12
Issue 13
Issue 14
Issue 15
Issue 16
Issue 17
Issue 18
Issue 19
Issue 20
Issue 21
Issue 22
Issue 23
Issue 24
Issue 25
Issue 26
Issue 27
Issue 28
Issue 29
Issue 30
Issue 31
Issue 32
Issue 33
Issue 34
Issue 35
Issue 36
Issue 37
Issue 39
Issue 40
Issue 41
Misc
FIBA vs North American Rules Comparison
Basketball for Men - 1922
submitted by Locus_Delicti to nba [link] [comments]


2021.10.20 23:04 Shot_Guidance_5354 dwtest, 'closure' error

I'm new to R and find it very hard to understand stackexchange posts so please be understanding

I calculated the deviation from covered interest rate parity between the KRW and the USD in Excel, and am trying to perform a Durbin-Watson test to check if I am on the right path.
I did it in Excel but am not totally sure of my answer, so I wanted to check in R.
I imported the sheet, put the dates in "TIME" and the deviations in "DEVIATION"
I then did a linear regression on (TIME~DEVIATION) and stored that
I checked the autocorrelation, and it seems to be around 0.2 points lower than my Durbin-Watson test in Excel, so I wanted to use the dwtest function to check.
Then I did dwtest(DEVIATION) and I get the $ operator error. I searched this but I don't really understand it, tbh... I guess my imported data list is an atomic object, so I tried using dwtest[DEVIATION], but then I get the "object of type 'closure' is not subsettable" error, so I'm not sure what to do there.
Any help is appreciated! Especially if its written in an easy way to understand,,
submitted by Shot_Guidance_5354 to RStudio [link] [comments]


2021.09.01 21:05 ProjectBirds I am willing to pay r expert to create some simple graphs and figures ASAP

Hi rstats. I have some data that requires basic figures and graphs regarding relationships between variables. I am willing to pay someone a couple hundred dollars to produce graphs and figures as soon as possible. The data is in Excel. Here is a short list of my overall goals, but I will settle for any simple figures, tests, and regressions you can create using this data.
You will get credit in the publication. I will need the r script for each test or figure you produce. You will get paid using venmo, zelle, or cashapp. I am a trustworthy person and I will provide you with my information upon agreement. Serious inquiries only please.
TESTS
- Correlation tests among variables (Raw data)
- Relationship between predictor variables using least squares regressions
- Multivariate tests
Temporal
- ANOVAs between years and variables
- Each temporal scale will have its own logistic regression.
Multivariate
- Non-metric multi-dimensional scaling (MDS) for visual component
- Bray-curtis similarity matrix
- Moran’s I autocorrelation test for adjacent cells
Figures
- Bar Charts of predictor variables across years.
- Box plots and density plots of each variable.
- Scatterplots of two-variables
- Correlations between sample and predictor variables. (use raw data)
submitted by ProjectBirds to rstats [link] [comments]


2021.05.19 13:33 MillennialBets (Short- to medium-term) bear case: the supercycle is here, it might just take too long for our Junes through Septembers to make money (with articles and models!)

Author: u/SeattlesBestTutor (Karma: 158, Created: Sep-2020).
(Short- to medium-term) bear case: the supercycle is here, it might just take too long for our Junes through Septembers to make money (with articles and models!) on vitards
PICTURES DETECTED: this DD post is better viewed in its original post
Hey everyone,
First-time effort-poster here. I started with $MT and $VALE LEAPs right after the original DD, bought every subsequent dip, got torched with the big-tech sell-off in late February, doubled down on Vitarded stonks, made everything back and more, and am now sitting on 206 options set to expire June through October because that is my area code and I am superstitious.
As an English major by trade, I like to think I'm fairly good at assessing the reliability of market commentary, and the great news is that more or less every reputable source agrees with our thesis over the long term. However, there appear to be significant headwinds emerging very shortly (maybe even this power hour, if not last week's dip!), which could hugely impact the value of options in that June through September range—in other words, the majority of this sub's positions.
My favorite source of information outside this forum is Predictive Analytics Models, which, though it uses equity futures as its preferred instrument, shares a similar philosophy to the Vitarded:
-Led by insider with hella experience
-Trades on macro information
-Long time horizon for plays
-Good at making money
So when the founder—a Swiss ex-hedge-fund guy with an excellent track record—correctly calls a market and commodities peak at the beginning of last week, then writes two articles saying the sky may be falling on us soon, I think it's worth sharing them with y'all.
I'm pretty sure I can't link the articles (either the bot will torpedo any SA links or they'll be behind a paywall), so I'll just summarize each, C&P below and intersperse screenshots of the models they use (which I have trouble interpreting).

If these articles are correct, then we have a few possible courses of action:
-Roll medium-term options out to January 2022 or later on a green day
-Sell them on a green day and re-enter later
-Convert many of them to commons (or sell and buy commons later)

Some possible counterarguments to the models (and therefore selling / rolling) would be:
-Commodity equities have recently not had much of a correlation with either commodity prices or the rest of the S&P
-Earnings are guaranteed to be sky-high this upcoming quarter, if not also the next
-Trying to time things is dum

Especially interested in u/vitocorlene and u/graybushactual916's thoughts on the matter.

-CML

***

ARTICLE ONE

SUMMARY
-Lack of liquidity due to destruction of bank reserves
-Fed expected to tighten policy even more
-Value stocks may be disproportionately harmed
ARTICLE
Systemic Liquidity Seasonality Suggests That Equity Markets Are Due To Tip-Over Within The Next Two Weeks; Yields Likely To Fall As Well, So High-Tech May Outperform Other Sectors
May 16, 2021 5:08 PM ET
This article updates what we wrote about the equity markets four weeks ago at the PAM portal: ("The equity markets head for a window of peak performance in the short-term, as good economic data may tip the Fed Reserve into changing its rhetoric and policy".) This is what we had to say:
But for many participants, including us, this doesn’t feel like a “runaway” bull market. Moreover, some areas of the market - most notably the formerly strong tech cloud stocks - are conspicuously lagging. This bull market is very peculiar. We also make the case that there are fundamental, liquidity issues that are starting to crop up, and the Fed Reserve’s reaction function may be starting to change.
It does look like risk appetite has dwindled, and continues to dwindle in the near term. The connotation is that the big institutional buyers are leaving the tech (high-risk) market, and starting to turn defensive. As long as this is the case, the current market tends to be a low-energy affair, and will tend to be very susceptible to external shocks, like changes in monetary and fiscal policies, and to liquidity flow issues, which we discuss in some detail below.
Original chart in the April 2021 article
https://preview.redd.it/3t4mixyukyz61.png?width=1312&format=png&auto=webp&s=66d1fbf32c0e0e467f45b5808f5afa5f79e8c9d9
This is how the above chart looks now
https://preview.redd.it/lnl1piyvkyz61.png?width=1312&format=png&auto=webp&s=85aa1b31aaec19c6b422aceddf6d347bc285d4c3
We continue the focus on the higher risk, high-tech, biotech and small cap sectors to gauge the appetite of the market for risk, and juxtapose that to the current systemic liquidity flows originating from the US Treasury and the Federal Reserve. We find that these markets are fast approaching a window of peaking performance over the next two to three weeks.
The high-tech sector did underperform, as bond yields continue to ratchet higher, leading to an outperformance of value and small caps. However, bond yields have rebounded in the past several weeks, putting a check on the slide in momentum stocks.
The most significant driver for stock and bond market performance is yet to come into effect, but we expect that to happen before the May month is over. We discussed it briefly in last month’s article, but this month this issue will be front and center.
We wrote about the systemic liquidity tightening in the face of the announced, sharp drawdown in the Treasury General Account balance, from what was $1.6 trillion when we last wrote about it to less than $500 billion by the end of June 2021. We also said that most of these funds will flow to the newly enhanced Fed O/N RRP facility, the likely destination for these funds which need to find a home. And that is what happened.
The TGA drawdown was further exacerbated by the termination of the Supplementary Leverage Ratio (SLR) waivers at the end of March; the large G-SIB banks eschewed the trouble those deposits bring to their capital ratio calculations. Therefore, term money had to move out, and there aren't many places to go. The RRP is the safest and least onerous place for these funds to go, as the zero percent rate at this Fed facility was a lot better than the negative term (money) market rates prevailing up to this time.
The problem is that TGA flows to the RRP facility counteract liquidity inflows, tighten systemic liquidity, reduce bank reserves, and push up volatility (VIX), which undercuts equities and pushes long-term yields lower (see chart below).
The VIX is very sensitive to systemic liquidity, especially the Fed's Balance Sheet (e.g., Bank Reserves).
The take-up at the Fed's O/N Reverse Repo facility is building into a tsunami, which expunges bank reserves wholesale, after a lag.
There is empirical evidence that the effect of liquidity is transmitted to the SPX via the VIX, but the impact comes only after a long lag.
This is the most insidious part – most investors can’t comprehend the long lags, so are oblivious to the approaching danger.
https://preview.redd.it/dmkcbswxkyz61.png?width=1294&format=png&auto=webp&s=94dc020ac9ae57e97b4d5cb878ae8ed4cfb42a54
The process goes like this: when an investor enters an RRP transaction with the Fed, the Fed sells a security to the investor with an agreement to repurchase that same security at a specified price at a specific time in the future (0 percent in this case). Securities sold under the RRP facility continue to be shown as assets held by the Fed, but the RRP transaction shifts some of the liabilities on the Federal Reserve’s balance sheet, specifically from deposits held by depository institutions (also known as bank reserves) to reverse repos (also on the liabilities side) while the trade is outstanding.
In other words, RRP transactions reduce the stock of bank reserves. It is the change rate of bank reserves which powers the rise and fall of financial asset prices -- which is why the drain towards the O/N RRP facility will hurt financial asset prices at some point. That point is fast approaching (see rectangle in chart above). That inflection point can come as early as the 3rd week of May, or during the last week of May.
There’s other supporting empirical evidence which we will show presentation style below.
These are modeled systemic liquidity factors affecting rise/fall of SPX, 10Yr Yield. Watch Factors Absorbing Bank Reserves (inv, directly tightens liquidity) and Factors Which Supply Reserves. Destruction of bank reserves model is crucial to watch. This model provides an inflection point during the period May 19 – 24 after which equities and bond yields should fall (see chart below).
https://preview.redd.it/kj4lh0yykyz61.png?width=1324&format=png&auto=webp&s=9396c480404e4de2a94f2e05042b9cb66759f80a
Here is a set of liquidity models which we have been showing regularly (see chart below) – the Fed's Balance Sheet and Bank Reserves chart: last Thursday and Friday, there was a massive SPX recovery, but that may just be a head fake. Liquidity seasonality tips over again during the period of May 12 - 19 (post a May 11 peak), so it looks like the sell-off is not done yet.
The VIX is telling us, exactly, that the ongoing blow-off in equities should transition into a tip-over later in the May 17 week. VIX changes hew close to changes in Factors Which Absorb Bank Reserves (brown line, chart below), which was validated by a top inflection on May 11. But other Fed Balance Sheet internals also indicate another May 14 - May 19 peak. We go with the latter date.
https://preview.redd.it/qgzqvfpzkyz61.png?width=1318&format=png&auto=webp&s=eccfcb18fcc5e3df78c427e45fcbdcbe369202d5
Here's a much bigger liquidity flows picture: there's a negative covariance between Implied Volatility and CB-5 central-bank-provided aggregate systemic liquidity (see chart below). We have shown versions of this model several times before.
Rising systemic liquidity pushes down Implied Volatility, which strengthens stock values, and vice versa (lag of 8 months).
If the impact of liquidity flow changes continues to apply to changes in equities, then a bottom in equities comes in July.
https://preview.redd.it/elbe5xe0lyz61.png?width=1310&format=png&auto=webp&s=998498a54dc30cbbc9262b995fa349a247774d9f
The inflection higher in Building Permits, Housing Starts, and Residential Investment, post COVID-19, may still continue to rise... but Homebuilders, Home Depot and lumber prices may have peaked, or are peaking (see chart below). This is an old chart which we have shown several times before to update the outlook on the US housing market.
We are also showing that the equity markets (S&P 500) are sensitive to downward fluctuations of the US housing market. By this measure, equities are topping out. Housing data may again start to perk up in early August - that is when US housing and the materials associated with the industry will start becoming interesting again.
https://preview.redd.it/ghj5uw23lyz61.png?width=1320&format=png&auto=webp&s=55005d4f9f2e80cd982bf9a927cdeee92f3495f4
Summary:
Liquidity seasonality wanes again as we head towards late May and that could last until July. Weaker stock markets and lower yields have historically been associated with declining liquidity flows, and we do not expect it to be different this time around.
But probably the most important aspect is that the Fed is more likely than expected to begin walking back its ultra-loose rhetoric sometime soon, as we continue to see upside surprises in Non-Farm Payroll (the April 2021 low data is probably an outlier), in stock market earnings, and in the dramatic acceleration in vaccination and a reopening of the domestic economy.
The last chart below provides an example of how changes in the US Treasury and Fed Reserve policy mix impacts equities and bonds. For the markets, total aggregate liquidity is the net difference of the US Treasury’s debt issuance, and the amount of securities purchased by the Federal Reserve. It is therefore crucial to know if the Fed is tightening while the Treasury continues the pace of its debt issuance. And vice versa.
https://preview.redd.it/w3a4sr54lyz61.png?width=1322&format=png&auto=webp&s=4d0df606ca04abbac437836b87a639d92ea7525b
Several FOMC members have already been floating the idea that a tapering conversation should begin when 75% of the population is vaccinated, and all signs point to this occurring sometime before mid-June. The June 16th FOMC meeting is therefore shaping up as providing a potential surprise change in Fed rhetoric and monetary policy.
This has the potential of lowering the total value of securities purchased by the Fed – and that lowers the delta between Fed Purchases of Securities (SOMA) and Treasury’s Debt Issuance. That is of prime significance because that delta determines the direction of Bank Reserve growth. If the Fed tightens, and the Treasury issues more debt, then Bank Reserve growth will slow, and that weakens equities, and lowers bond yields.

***
ARTICLE TWO

SUMMARY
-The commodity supercycle is more or less a sure bet and we're presently in its early stages
-However, it is likely to last long enough that exposure through short- and medium-term options will be challenging
-In the short to medium term, said options might get kneed in the balls by the same cause as above, diminishing liquidity flows, which causes them to peak now (or even last week?)
-But it's possible to forecast with confidence that they'll pick up again September through January.

ARTICLE
Time To Take Profits In Long Commodity Bets; Step Aside Until September As Risk Assets Prices May Moderate On Diminishing Liquidity Flows
May 18, 2021 5:23 AM ET
The last time we wrote about commodities (base metals) was in March 2021: ("The commodities "super-cycle" ignites base metals take-off as super-abundant global liquidity is put to work on major infrastructure projects"). We wrote about the possibility of a so-called "super-cycle" in commodities, and showed model work which seems to support the theme. We said:
Looking back all the way to 1972, commodities have never been cheaper relative to the broad stock market. The average of this ratio is 4.1 over 50 years. Today, it sits near 0.5. But it seems poised to soar. What we at PAM did was to juxtapose a vector autocorrelation analysis which shows a possible 15- to 18-year cycle in the ratio between commodities and equities. We show the result below, which illustrates a commodity super-cycle, possibly in the making.
Original chart on the March 2021 article

https://preview.redd.it/fx197seslyz61.png?width=1296&format=png&auto=webp&s=7f322248b1bab6adaa9fe59e8e03351eff23bfec
A commodities supercycle is considered to be a multi-year trend, where a wide range of basic resources enjoy rising prices thanks to a structural shift in demand versus supply. Typically, what happens is that supply stagnates or drops for several years as economic demand is itself weak or constant. However, at one point a new business cycle starts, and demand picks up, while supply is unable to immediately react. We believe the global economy is in that situation today.
This is how the chart above looks today.

https://preview.redd.it/8oxkkr6tlyz61.png?width=1318&format=png&auto=webp&s=070d6909e5e4f3553b3c9eb9368fe0f111511699
Simply put, commodities, particularly industrial and base metals, are responding to the perfect storm of drivers. These drivers are primarily supply disruptions followed by a recent rebound in demand; for commodities like copper, this could be defining a larger trend. In addition, central bank stimulus across the globe could be combining with demand factors to spark a commodities super-cycle in the not-distant future.
Commodities are undoubtedly on the move. Copper is at an eight-year high and lumber has tripled in a year’s time. Is this the start of a commodities super-cycle or are these price moves “transitory” like the Federal Reserve keeps telling us? We believe that the Fed is likely mistaken in the longer-run. Higher inflation will be a component driver of higher commodity prices, maybe not this year, but likely next year.
There is no doubt that what primarily drives commodity prices, in the final analysis, is systemic liquidity. And liquidity is oozing today like there is no tomorrow. Most major central banks have lowered rates and global fiscal policy greatly accelerated spending in order to ease the pain of lockdowns. The U.S. national debt alone has increased by a staggering 16% since the onset of the pandemic. Aside from enabling demand and facilitating purchase of commodities, extreme levels of monetary liquidity also ignite fears for the stability of fiat currencies – and that is particularly true for the US Dollar, the unit of global trade exchange.
There is some evidence that change in the US Dollar is impacted by the change rate of the US Treasury's debt issuance (but what matters is the change rate of debt, not the change in nominal amounts or levels of debt). See chart below. This relationship is particularly essential in timing the ebb and flow of the exchange rate of the US Dollar, as the change rate of US debt issuance leads by one quarter. With that as a given, we can also use this to form a macro view of future potential changes in the price of commodities in general.
https://preview.redd.it/g4n937utlyz61.png?width=1302&format=png&auto=webp&s=02f8334d57d2dde23fe61e5b6ad2cf2b1cc18cbd
This is the ultimate liquidity flow for the US financial system. And of course, we know that there is a very strong inverse correlation between the exchange value of the US Dollar and the price of commodities (see chart below). That’s the linkage between fiscal monetary policy and the price of commodities – it runs through the global medium of exchange – the US Dollar.
https://preview.redd.it/dzmhfqkulyz61.png?width=1296&format=png&auto=webp&s=ca576dbf70d6c554f6cb4f42107b5ea818442349
Right off the bat, we say that with the US Treasury's intention to moderate the issuance of debt over the next two quarters, for reasons that we explained in detail in the bond article in April ("Bonds' And Equities' Big Picture: Bond Yields Should Peak Soon, Now That The SLR Issue Has Been Resolved; Equities May Switch Back To Positive Covariance With Yields"), we are facing some moderation in the previously frenetic rise in prices of commodities, especially in base metals, over the next two quarters.
We bolster this moderating view by reposting the development in China’s Total Social Financing – the lead indicator of what to expect in anything that has to do with commodity prices (see two charts below).
https://preview.redd.it/iqau65fwlyz61.png?width=1306&format=png&auto=webp&s=8ac65ad11a8a336a573d4cfb75a67bf02060659f
China's growth rate has slowed down from the frenetic rate generated about a decade ago, but it is still the single largest consumer of commodities in the world today. The Chinese growth rate may have declined, but the volume of raw materials and resources needed to sustain current growth (in a comparatively larger economy) is still comparable to that of a decade ago.
That growth rate is tightly regimented by the ruling Chinese Communist Party (CCP) and is discussed and ratified in December of every year - and it is laid out 5 years in advance. This schedule of budget expenditures is implemented in real time via Total Social Financing (TSF). That is what makes TSF a very potent lead indicator of what China's PMI manufacturing will likely do in the future (see chart above). That in turn is a lead indicator of what commodities (base metals in particular) will likely do in the near future (see chart below).
https://preview.redd.it/v3nwyq4xlyz61.png?width=1310&format=png&auto=webp&s=605403d9a6d5dd15547ff37348d126e255e7a05f
There is a larger macro picture if we are to use China's fiscal policy as a yardstick to determine the future outlook for commodity prices. The People's Bank of China (PBoC) also publishes its Net Lending/Borrowing intentions (5 years ahead). China's budgetary expenditures are the low-frequency expression of these lending and borrowing intentions. To extend the analogy further, the TSF is the high-frequency expression of these intentions (see chart below).
https://preview.redd.it/naplgmoxlyz61.png?width=1296&format=png&auto=webp&s=fe82f1c7999f3847306cfbd7b18cddaf46163ee8
Note that the correlations work with a long lag between the primary driver data and its impact on the commodity price underlying (between 9 months to 1 year). That is not a disadvantage, as these correlations have become almost deterministic. It is possible therefore to have a commodity outlook almost one year in advance. We have been using this TSF-based forecasting method, and have had phenomenal success so far.
Summary:
If we put all of these drivers together (the fiscal liquidity flows, US Dollar outlook, TSF future development), we see that commodity prices should peak this month of May, moderate significantly until September, and then rally again to a top sometime in Q1 2022, as global growth takes a pause (see chart below). The outlook on base metal miners should be similar.
https://preview.redd.it/8u2js0kylyz61.png?width=1266&format=png&auto=webp&s=12118d08611289bd82718a1c336f2c398a15210a
Next year should see global growth moderate from whatever high point it may achieve in 2021, the year of recovery. That is not merely an economic forecast – it is a mathematical truism, and a lagged function of the large economies' fiscal expenditures (see chart above). We should expect commodity prices (especially the China sensitive resources, like base metals) to moderate as well. So commodity investors should cash in on commodity-based profits NOW, bide your time, and then get back into the market by September. We will be there with you, guiding you all the way.
***
TickerDatabase entries updated:
BOND
CB
ET
FARM
MT
PAM
SA
submitted by MillennialBets to MillennialBets [link] [comments]


2021.05.19 01:07 SeattlesBestTutor (Short- to medium-term) bear case: the supercycle is here, it might just take too long for our Junes through Septembers to make money (with articles and models!)

(Short- to medium-term) bear case: the supercycle is here, it might just take too long for our Junes through Septembers to make money (with articles and models!)
Hey everyone,
First-time effort-poster here. I started with $MT and $VALE LEAPs right after the original DD, bought every subsequent dip, got torched with the big-tech sell-off in late February, doubled down on Vitarded stonks, made everything back and more, and am now sitting on 206 options set to expire June through October because that is my area code and I am superstitious.
As an English major by trade, I like to think I'm fairly good at assessing the reliability of market commentary, and the great news is that more or less every reputable source agrees with our thesis over the long term. However, there appear to be significant headwinds emerging very shortly (maybe even this power hour, if not last week's dip!), which could hugely impact the value of options in that June through September range—in other words, the majority of this sub's positions.
My favorite source of information outside this forum is Predictive Analytics Models, which, though it uses equity futures as its preferred instrument, shares a similar philosophy to the Vitarded:
-Led by insider with hella experience
-Trades on macro information
-Long time horizon for plays
-Good at making money
So when the founder—a Swiss ex-hedge-fund guy with an excellent track record—correctly calls a market and commodities peak at the beginning of last week, then writes two articles saying the sky may be falling on us soon, I think it's worth sharing them with y'all.
I'm pretty sure I can't link the articles (either the bot will torpedo any SA links or they'll be behind a paywall), so I'll just summarize each, C&P below and intersperse screenshots of the models they use (which I have trouble interpreting).

If these articles are correct, then we have a few possible courses of action:
-Roll medium-term options out to January 2022 or later on a green day
-Sell them on a green day and re-enter later
-Convert many of them to commons (or sell and buy commons later)

Some possible counterarguments to the models (and therefore selling / rolling) would be:
-Commodity equities have recently not had much of a correlation with either commodity prices or the rest of the S&P
-Earnings are guaranteed to be sky-high this upcoming quarter, if not also the next
-Trying to time things is dum

Especially interested in u/vitocorlene and u/graybushactual916's thoughts on the matter.

-CML

***

ARTICLE ONE

SUMMARY
-Lack of liquidity due to destruction of bank reserves
-Fed expected to tighten policy even more
-Value stocks may be disproportionately harmed
ARTICLE
Systemic Liquidity Seasonality Suggests That Equity Markets Are Due To Tip-Over Within The Next Two Weeks; Yields Likely To Fall As Well, So High-Tech May Outperform Other Sectors
May 16, 2021 5:08 PM ET
This article updates what we wrote about the equity markets four weeks ago at the PAM portal: (“The equity markets head for a window of peak performance in the short-term, as good economic data may tip the Fed Reserve into changing its rhetoric and policy”.) This is what we had to say:
But for many participants, including us, this doesn’t feel like a “runaway” bull market. Moreover, some areas of the market - most notably the formerly strong tech cloud stocks - are conspicuously lagging. This bull market is very peculiar. We also make the case that there are fundamental, liquidity issues that are starting to crop up, and the Fed Reserve’s reaction function may be starting to change.
It does look like risk appetite has dwindled, and continues to dwindle in the near term. The connotation is that the big institutional buyers are leaving the tech (high-risk) market, and starting to turn defensive. As long as this is the case, the current market tends to be a low-energy affair, and will tend to be very susceptible to external shocks, like changes in monetary and fiscal policies, and to liquidity flow issues, which we discuss in some detail below.
Original chart in the April 2021 article
https://preview.redd.it/3t4mixyukyz61.png?width=1312&format=png&auto=webp&s=66d1fbf32c0e0e467f45b5808f5afa5f79e8c9d9
This is how the above chart looks now
https://preview.redd.it/lnl1piyvkyz61.png?width=1312&format=png&auto=webp&s=85aa1b31aaec19c6b422aceddf6d347bc285d4c3
We continue the focus on the higher risk, high-tech, biotech and small cap sectors to gauge the appetite of the market for risk, and juxtapose that to the current systemic liquidity flows originating from the US Treasury and the Federal Reserve. We find that these markets are fast approaching a window of peaking performance over the next two to three weeks.
The high-tech sector did underperform, as bond yields continue to ratchet higher, leading to an outperformance of value and small caps. However, bond yields have rebounded in the past several weeks, putting a check on the slide in momentum stocks.
The most significant driver for stock and bond market performance is yet to come into effect, but we expect that to happen before the month of May is over. We discussed it briefly in last month's article, but this month this issue will be front and center.
We wrote about the systemic liquidity tightening in the face of the announced, sharp drawdown in the Treasury General Account balance, from $1.6 trillion when we last wrote about it to less than $500 billion by the end of June 2021. We also said that most of these funds, which need to find a home, will flow to the newly enhanced Fed O/N RRP facility. And that is what happened.
The TGA drawdown was further exacerbated by the termination of the Supplementary Leverage Ratio (SLR) waivers at the end of March, as the large G-SIB banks eschewed the trouble those deposits bring to their capital ratio calculations. Therefore, term money had to move out, and there aren't many places for it to go. The RRP is the safest and least onerous place for these funds, as the zero percent rate at this Fed facility is a lot better than the negative term (money) market rates prevailing up to this time.
The problem is that the TGA flow to the RRP facility counteracts liquidity inflows, tightens systemic liquidity, reduces bank reserves, and pushes up volatility (the VIX), which undercuts equities and pushes long-term yields lower (see chart below).
The VIX is very sensitive to systemic liquidity, especially the Fed's Balance Sheet (e.g., Bank Reserves).
The take-up at the Fed's O/N Reverse Repo facility is building into a tsunami, which expunges Bank Reserves wholesale, after a lag.
There is empirical evidence that the effect of liquidity is transmitted to the SPX via the VIX, but the impact comes only after a long lag.
This is the most insidious part – most investors can’t comprehend the long lags, so are oblivious to the approaching danger.
https://preview.redd.it/dmkcbswxkyz61.png?width=1294&format=png&auto=webp&s=94dc020ac9ae57e97b4d5cb878ae8ed4cfb42a54
The process goes like this: when an investor enters an RRP transaction with the Fed, the Fed sells a security to the investor with an agreement to repurchase that same security at a specified price at a specific time in the future (0 percent in this case). Securities sold under the RRP facility continue to be shown as assets held by the Fed, but the RRP transaction shifts some of the liabilities on the Federal Reserve’s balance sheet, specifically from deposits held by depository institutions (also known as bank reserves) to reverse repos (also on the liabilities side) while the trade is outstanding.
In other words, RRP transactions reduce the stock of bank reserves. It is the change rate of bank reserves which powers the rise and fall of financial asset prices -- which is why the drain towards the O/N RRP facility will hurt financial asset prices at some point. That point is fast approaching (see rectangle in chart above). That inflection point can come as early as the 3rd week of May, or during the last week of May.
There is other supporting empirical evidence, which we will show presentation-style below.
These are modeled systemic liquidity factors affecting the rise and fall of the SPX and the 10Yr Yield. Watch Factors Absorbing Bank Reserves (inverted, directly tightens liquidity) and Factors Which Supply Reserves. The destruction-of-bank-reserves model is crucial to watch. This model points to an inflection point during the period May 19 – 24, after which equities and bond yields should fall (see chart below).
https://preview.redd.it/kj4lh0yykyz61.png?width=1324&format=png&auto=webp&s=9396c480404e4de2a94f2e05042b9cb66759f80a
Here is a set of liquidity models which we have been showing regularly (see chart below) – the Fed's Balance Sheet and Bank Reserves chart: last Thursday and Friday, there was a massive SPX recovery, but that may just be a head fake. Liquidity seasonality tips over again during the period of May 12 - 19 (post a May 11 peak), so it looks like the sell-off is not done yet.
The VIX is telling us, exactly, that the ongoing blow-off in equities should transition into a tip-over later in the May 17 week. VIX changes hew close to changes in Factors Which Absorb Bank Reserves (brown line, chart below), which was validated by a top inflection on May 11. But other Fed Balance Sheet internals also indicate another May 14 - May 19 peak. We go with the latter date.
https://preview.redd.it/qgzqvfpzkyz61.png?width=1318&format=png&auto=webp&s=eccfcb18fcc5e3df78c427e45fcbdcbe369202d5
Here's a much bigger liquidity flows picture: there's a negative covariance between Implied Volatility and CB-5 central bank-provided aggregate systemic liquidity (see chart below). We have shown versions of this model several times before.
Rising systemic liquidity pushes down Implied Volatility, which strengthens stock values, and vice versa (lag of 8 months).
If the impact of liquidity flow changes continues to apply to changes in equities, then a bottom in equities comes in July.
https://preview.redd.it/elbe5xe0lyz61.png?width=1310&format=png&auto=webp&s=998498a54dc30cbbc9262b995fa349a247774d9f
The inflection higher in Building Permits, Housing Starts, and Residential Investment post COVID-19 may still continue . . . but Homebuilders, Home Depot and Lumber prices may have peaked, or are peaking (see chart below). This is an old chart which we have shown several times before to update the outlook on the US housing market.
We are also showing that the equity markets (S&P 500) are sensitive to downward fluctuations of the US housing market. By this measure, equities are topping out. Housing data may again start to perk up in early August – that is when US housing, and the materials associated with the industry, will start becoming interesting again.
https://preview.redd.it/ghj5uw23lyz61.png?width=1320&format=png&auto=webp&s=55005d4f9f2e80cd982bf9a927cdeee92f3495f4
Summary:
Liquidity seasonality wanes again as we head towards late May and that could last until July. Weaker stock markets and lower yields have historically been associated with declining liquidity flows, and we do not expect it to be different this time around.
But probably the most important aspect is a Fed that is more likely than expected to begin walking back its ultra-loose rhetoric sometime soon, as we continue to see upside surprises in Non-Farm Payrolls (the April 2021 low reading is probably an outlier), in stock market earnings, and in the dramatic acceleration in vaccination and the reopening of the domestic economy.
The last chart below provides an example of how changes in the US Treasury and Fed Reserve policy mix impacts equities and bonds. For the markets, total aggregate liquidity is the net difference of the US Treasury’s debt issuance, and the amount of securities purchased by the Federal Reserve. It is therefore crucial to know if the Fed is tightening while the Treasury continues the pace of its debt issuance. And vice versa.
https://preview.redd.it/w3a4sr54lyz61.png?width=1322&format=png&auto=webp&s=4d0df606ca04abbac437836b87a639d92ea7525b
Several FOMC members have already been floating the idea that a tapering conversation should begin when 75% of the population is vaccinated, and all signs point to this occurring sometime before mid-June. The June 16th FOMC meeting is therefore shaping up as a potential surprise change in Fed rhetoric and monetary policy.
This has the potential of lowering the total value of securities purchased by the Fed – and that lowers the delta between Fed Purchases of Securities (SOMA) and Treasury’s Debt Issuance. That is of prime significance because that delta determines the direction of Bank Reserve growth. If the Fed tightens, and the Treasury issues more debt, then Bank Reserve growth will slow, and that weakens equities, and lowers bond yields.

***
ARTICLE TWO

SUMMARY
-The commodity supercycle is more or less a sure bet and we're presently in its early stages
-However, it is likely to last long enough that exposure through short- and medium-term options will be challenging
-In the short to medium term, said options might get kneed in the balls by the same cause as above (diminishing liquidity flows), which looks set to make commodity prices peak now (or even last week?)
-But it's possible to forecast with confidence that they'll pick up again September through January.

ARTICLE
Time To Take Profits In Long Commodity Bets; Step Aside Until September As Risk Assets Prices May Moderate On Diminishing Liquidity Flows
May 18, 2021 5:23 AM ET
The last time we wrote about commodities (base metals) was in March 2021: (“The commodities “super-cycle” ignites base metals take-off as super-abundant global liquidity is put to work on major infrastructure projects”). We wrote about the possibility of a so-called “super-cycle” in commodities, and showed model work which seems to support the theme. We said:
Looking back all the way to 1972, commodities have never been cheaper relative to the broad stock market. The average of this ratio is 4.1 over 50 years. Today, it sits near 0.5. But it seems poised to soar. What we at PAM did was to juxtapose a vector autocorrelation analysis which shows a possible 15- to 18-year cycle in the ratio between commodities and equities. We show the result below, which illustrates a commodity super-cycle, possibly in the making.
Original chart on the March 2021 article

https://preview.redd.it/fx197seslyz61.png?width=1296&format=png&auto=webp&s=7f322248b1bab6adaa9fe59e8e03351eff23bfec
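For readers who want to poke at this kind of cycle claim themselves, here is a minimal sketch of the underlying idea: compute the commodities-to-equities ratio and look at its autocorrelation at long lags, where a bump around 15 to 18 years would be consistent with the cycle PAM describes. This is a simplified, univariate version of their "vector autocorrelation analysis"; the CSV file and column names are placeholders, not their actual data.

```python
import pandas as pd
from statsmodels.tsa.stattools import acf

# Hypothetical annual data, 1972 onward: a broad commodity index and an equity index
df = pd.read_csv("commodities_vs_equities.csv", index_col="year")  # placeholder file
ratio = df["commodity_index"] / df["equity_index"]

# Autocorrelation of the ratio at lags of up to 20 years
r = acf(ratio.dropna(), nlags=20, fft=False)
for lag, value in enumerate(r):
    print(f"lag {lag:2d} years: autocorrelation {value:+.2f}")

# A local peak somewhere around lags 15-18 would be consistent with the
# 15- to 18-year commodities-to-equities cycle described in the article.
```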
A commodities supercycle is considered to be a multi-year trend, where a wide range of basic resources enjoy rising prices thanks to a structural shift in demand versus supply. Typically, what happens is that supply stagnates or drops for several years as economic demand is itself weak or constant. However, at one point a new business cycle starts, and demand picks up, while supply is unable to immediately react. We believe the global economy is in that situation today.
This is how the chart above looks today.

https://preview.redd.it/8oxkkr6tlyz61.png?width=1318&format=png&auto=webp&s=070d6909e5e4f3553b3c9eb9368fe0f111511699
Simply put, commodities, particularly industrial and base metals, are responding to a perfect storm of drivers. These drivers are primarily supply disruptions, followed by a recent rebound in demand; for commodities like copper, this could be defining a larger trend. In addition, central bank stimulus across the globe could be combining with demand factors to spark a commodities super-cycle in the not-too-distant future.
Commodities are undoubtedly on the move. Copper is at an eight-year high and lumber has tripled in a year’s time. Is this the start of a commodities super-cycle or are these price moves “transitory” like the Federal Reserve keeps telling us? We believe that the Fed is likely mistaken in the longer-run. Higher inflation will be a component driver of higher commodity prices, maybe not this year, but likely next year.
There is no doubt that what primarily drives commodity prices, in the final analysis, is systemic liquidity. And liquidity is oozing today like there is no tomorrow. Most major central banks have lowered rates and global fiscal policy greatly accelerated spending in order to ease the pain of lockdowns. The U.S. national debt alone has increased by a staggering 16% since the onset of the pandemic. Aside from enabling demand and facilitating purchase of commodities, extreme levels of monetary liquidity also ignite fears for the stability of fiat currencies – and that is particularly true for the US Dollar, the unit of global trade exchange.
There is some evidence that change in the US Dollar is driven by the change rate of the US Treasury's debt issuance (what matters is the change rate of debt, not the change in nominal amounts or levels of debt). See chart below. This relationship is particularly useful in timing the ebb and flow of the exchange rate of the US Dollar, as the change rate of US debt issuance leads by one quarter. With that as a given, we can also use this to form a macro view of future potential changes in commodity prices in general.
https://preview.redd.it/g4n937utlyz61.png?width=1302&format=png&auto=webp&s=02f8334d57d2dde23fe61e5b6ad2cf2b1cc18cbd
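The point that the change rate of debt (not its level) is what matters is easy to check mechanically. Below is a minimal sketch, assuming quarterly series for Treasury debt outstanding and a dollar index sit in a hypothetical CSV; the debt series is converted to a quarter-over-quarter change rate and shifted forward one quarter before correlating, mirroring the one-quarter lead claimed above.

```python
import pandas as pd

# Hypothetical quarterly data: US Treasury debt outstanding and a US Dollar index
df = pd.read_csv("debt_and_dollar.csv", index_col="quarter")  # placeholder file

debt_change = df["treasury_debt"].pct_change()   # change rate of debt issuance
dollar_change = df["dollar_index"].pct_change()

# Shift the debt series forward one quarter, since it is claimed to lead by one quarter
corr_change = debt_change.shift(1).corr(dollar_change)
corr_level = df["treasury_debt"].shift(1).corr(df["dollar_index"])

print(f"corr(debt change rate, led 1Q, vs dollar change): {corr_change:+.2f}")
print(f"corr(debt level, led 1Q, vs dollar level):        {corr_level:+.2f}")
```

If the relationship works as described, the first correlation is the meaningful one (presumably negative, if faster issuance weakens the dollar), while the level-on-level correlation mostly picks up common trends.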
This is the ultimate liquidity flow for the US financial system. And of course, we know that there is a very strong inverse correlation between the exchange value of the US Dollar and the price of commodities (see chart below). That's the linkage between fiscal and monetary policy and the price of commodities – it runs through the global medium of exchange, the US Dollar.
https://preview.redd.it/dzmhfqkulyz61.png?width=1296&format=png&auto=webp&s=ca576dbf70d6c554f6cb4f42107b5ea818442349
Right off the bat, we say that, with the US Treasury's intention to moderate the issuance of debt over the next two quarters (for reasons that we explained in detail in the bond article in April, "Bonds’ And Equities’ Big Picture: Bond Yields Should Peak Soon, Now That The SLR Issue Has Been Resolved; Equities May Switch Back To Positive Covariance With Yields"), we are facing some moderation in the previously frenetic rise in the prices of commodities, especially base metals, over the next two quarters.
We bolster this moderating view by reposting the development in China’s Total Social Financing – the lead indicator of what to expect in anything that has to do with commodity prices (see two charts below).
https://preview.redd.it/iqau65fwlyz61.png?width=1306&format=png&auto=webp&s=8ac65ad11a8a336a573d4cfb75a67bf02060659f
China's growth rate has slowed down from the frenetic rate generated about a decade ago, but it is still the single largest consumer of commodities in the world today. The Chinese growth rate may have declined, but the volume of raw materials and resources needed to sustain current growth (in a comparatively larger economy) is still comparable to that of a decade ago.
That growth rate is tightly regimented by the ruling Chinese Communist Party (CCP) and is discussed and ratified in December of every year – and it is laid out 5 years in advance. This schedule of budget expenditures is implemented in real time via Total Social Financing (TSF). That is what makes TSF a very potent lead indicator of what China's PMI manufacturing will likely do in the future (see chart above). That in turn is a lead indicator of what commodities (base metals in particular) will likely do in the near future (see chart below).
https://preview.redd.it/v3nwyq4xlyz61.png?width=1310&format=png&auto=webp&s=605403d9a6d5dd15547ff37348d126e255e7a05f
There is a larger macro picture if we are to use China's fiscal policy as a yardstick to determine the future outlook for commodity prices. The People's Bank of China (PBoC) also publishes its Net Lending/Borrowing intentions (5 years ahead). China's budgetary expenditures are the low-frequency expression of these lending and borrowing intentions. To extend the analogy further, the TSF is the high-frequency expression of these intentions (see chart below).
https://preview.redd.it/naplgmoxlyz61.png?width=1296&format=png&auto=webp&s=fe82f1c7999f3847306cfbd7b18cddaf46163ee8
Note that the correlations work with a long lag between the primary driver data and its impact on the underlying commodity price (between 9 months and 1 year). That is not a disadvantage, as these correlations have become almost deterministic. It is possible therefore to have a commodity outlook almost one year in advance. We have been using this TSF-based forecasting method, and have had phenomenal success so far.
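As a rough illustration of how such a 9- to 12-month lead can be located in data, here is a sketch that scans the correlation between TSF growth and a base-metals index over leads of 0 to 12 months. The file and column names are placeholders, and this is not PAM's actual model, just the generic lead-lag scan the paragraph describes.

```python
import pandas as pd

# Hypothetical monthly data: China Total Social Financing and a base-metals price index
df = pd.read_csv("tsf_and_metals.csv", index_col="month")  # placeholder file
tsf_growth = df["tsf"].pct_change(12)             # year-over-year TSF growth
metals_growth = df["base_metals"].pct_change(12)  # year-over-year metals price growth

# Correlate TSF growth leading by k months against metals growth, for k = 0..12
for k in range(13):
    corr = tsf_growth.shift(k).corr(metals_growth)
    print(f"TSF leading by {k:2d} months: correlation {corr:+.2f}")

# If the lag described above holds, the strongest correlations should show up
# somewhere in the 9- to 12-month range.
```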
Summary:
If we put all of these drivers together (the fiscal liquidity flows, US Dollar outlook, TSF future development), we see that commodity prices should peak this month of May, moderate significantly until September, and then rally again to a top sometime in Q1 2022, as global growth takes a pause (see chart below). The outlook on base metal miners should be similar.
https://preview.redd.it/8u2js0kylyz61.png?width=1266&format=png&auto=webp&s=12118d08611289bd82718a1c336f2c398a15210a
Next year should see global growth moderate from whatever high point it may achieve in 2021, the year of recovery. That is not merely an economic forecast – it is a mathematical truism, and a lagged function of the large economies' fiscal expenditures (see chart above). We should expect commodity prices (especially the China-sensitive resources, like base metals) to moderate as well. So commodity investors should cash in on commodity-based profits NOW, bide their time, and then get back into the market by September. We will be there with you, guiding you all the way.
***
submitted by SeattlesBestTutor to Vitards [link] [comments]


2020.07.29 21:29 zyxal13 Robustness

Hi guys, I have to do a multivariate regression (time series) for my class and I have a problem. I don't know how to check for robustness of my variables. My professor said I should use the Newey-West method, but since I'm a beginner in econometrics, I don't know how. I've done my regression in Excel. Can I use the Newey-West Test in Excel? Am I right in assuming that robustness means that I'm checking for autocorrelation and heteroskedasticity? If my variables happen to not be robust, what do I do then?
I could really use your help. Thanks in advance.
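For anyone landing on this thread with the same problem: as far as I know, Excel's built-in regression tool does not produce Newey-West (HAC) standard errors, but the same regression can be re-run in Python, where the correction is a single argument. A minimal sketch with statsmodels, assuming the dependent variable and regressors have been exported from the spreadsheet to a CSV (the file and column names are placeholders):

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder CSV exported from the Excel sheet: one dependent variable, two regressors
df = pd.read_csv("regression_data.csv")
y = df["y"]
X = sm.add_constant(df[["x1", "x2"]])

# Fit by OLS twice: once with classical standard errors, once with
# Newey-West (HAC) standard errors; maxlags sets how many lags of
# autocorrelation the correction allows for.
classical = sm.OLS(y, X).fit()
newey_west = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

print(classical.summary())
print(newey_west.summary())
```

The coefficients are identical in both fits; only the standard errors (and hence the t-statistics and p-values) change. In this context, "robust" usually means exactly that: the estimates are kept, but the inference is made valid in the presence of autocorrelation and heteroskedasticity.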
submitted by zyxal13 to econometrics [link] [comments]


2019.08.21 05:04 Clara_mtg Public Policy expert finds that rich people are more likely to live in expensive neighborhoods, blames Foreigners

Before I start this R1 I just want to note that someone else wrote a critique here. Unfortunately that critique is a bit crap and rather misses the forest for the trees. Data critiques are rather pointless in the face of fundamental methodological and statistical failures. Also I hate anything that ever criticizes something as a fallacy. It drives me absolutely nuts.
I had intended for this R1 to be longer but I gave up trying to find data to actually analyze the situation. If anyone knows of/has data and wants to let me have at it I’d love to. I’m trying to get some stats projects in my portfolio and the more of them that are relevant to people other than nerds the better.
This is version two of this R1. My first version was much, much less tactful and probably not appropriate for this sub. Unfortunately much of the structure has been lost, so I apologize for the incredibly poor organization. I'm not very good at putting thoughts together in an organized manner even during the best of times.
The thrust of this paper is that the weirdness in the relationship between prices and income in Vancouver is due to the amount of foreign ownership. In particular underreporting of income that leads to higher price to income ratios.
The housing market is an incredibly complicated market. It exhibits almost every difficulty you could come up with: Spatial autocorrelation, normal autocorrelation, really complicated causal relationships, lots of causal interplay among various factors, clustering effects and probably lots more I didn’t think of.
Attempting to analyze the housing market with single point-in-time estimates is a fool's errand. Doubly so if you don't understand basic economics. Triply so if you don't understand how regressions work.
Let’s get the really obvious critique out of the way. Regressions with n=6, 14 or 23 don’t count especially when said data points are not independent of each other.
It is important, then, to step back and grasp which factors could not account for this pattern, or are very unlikely to do so. Consider a few factors that have been offered up in the housing debate to account for Vancouver’s affordability woes: development charges and other “supply constraints”, lax mortgage lending, and low interest rates. None of these causal factors could plausibly explain the divergence in ratios between municipalities.
Let’s take a look at the author’s justification for this:
If lax mortgage lending or interest rate differences were driving the divergence, then we would expect that mortgage lending policy and interest rates varied sharply across municipalities. But the idea that Burnaby has a different mortgage lending regime than Surrey, say, or has much lower interest rates, is implausible and not supported by any available evidence.
Why should we expect variation in mortgage lending practices to be uniform across the income distribution? It seems a pretty reasonable assumption to me that a bank's willingness to lend to someone depends on their income, and the distribution of incomes across municipalities is not constant.
The idea that development charges or restrictions (e.g., permit times) could account for the pattern is similarly implausible. Development charges do not typically apply to building (or rebuilding) a detached house, which is the most that could be at play given that almost none of the municipalities can build any net new detached houses (due to the Agricultural Land Reserve). Thus it’s hard to see how development charges could have a substantial effect on detached house prices.
Housing prices do not exist in a vacuum. Condos, apartments and other types of housing are substitute goods and their prices can (and do) affect the prices of single detached homes.
Differing permit times are also unlikely to have any substantial effect: buyers are not going to pay massively more if the permit time for a new build is 6 months instead of 3. In fact, it is unlikely to be a significant factor at all.
sigh Those extra three months to get a permit cost money. If it costs more to build a house then the price of said house will likely increase. In addition the distribution of permit times is right skewed with things like apartment complexes taking longer to get approved which affects a larger amount of housing than a permit for a single family unit or condo.
What might account for the divergence then? If substantial amounts of foreign money were used to purchase housing, then that might generate such a pattern, since the declared Canadian incomes of this international elite might have little relationship to buyers’ purchasing power.
It is worth remembering that wealth has a large effect on people’s ability to purchase housing. This doesn’t really undermine the author’s point though.
The relationship between foreign ownership and de-coupling is depicted in Figure 3 for 2016. The correlation is 0.96. (1 is a perfect correlation, 0 is the absence of any correlation.)
There is nothing wrong with this; I just want to put it out here so y'all can get a sense of the kind of paper we're working with.
This is a remarkably strong relationship: the vast majority of the variation in price to income ratios can be accounted for by foreign ownership.
If you have an R² = .93 when analyzing a population that we know to have pretty significant variation, this should raise red flags. If one factor explains this much, you're probably regressing left shoe on right shoe, which is almost assuredly what is happening here. The P/I ratio depends a lot on the types of housing. Foreign owners buy more expensive housing on average, which tends to be concentrated in specific areas. This concentration raises the prices of single family detached homes without an increase in the median income of the same magnitude, for two reasons: first, because the relationship between housing prices and income is not linear; and second, because single family detached homes and other types of housing are imperfect substitutes, so changes in one will not induce the same change in the other.
A common rejoinder is that “correlation is not causation”, but that is unlikely to be a valid critique here
No. No no no. 2SLS exists for a reason. Determining causation is very difficult, especially in a market as complicated as the housing market. Attempting to do so with a point-in-time sample is insane. There could be all kinds of unidentified confounders; reality is complicated and we're not omniscient.
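For readers who have not met the term: 2SLS (two-stage least squares) can be sketched as literally two OLS regressions, first regressing the endogenous variable on an instrument and then regressing the outcome on the fitted values. A toy illustration with statsmodels follows; the data are simulated and the variable names made up, and a real application would use a dedicated IV routine so the second-stage standard errors come out right.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated endogeneity: x is correlated with the error term u,
# while z is an instrument (drives x, but is unrelated to u)
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 2.0 * x + u                       # true slope is 2.0

# Stage 1: regress the endogenous regressor on the instrument
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = stage1.fittedvalues

# Stage 2: regress the outcome on the stage-1 fitted values
stage2 = sm.OLS(y, sm.add_constant(x_hat)).fit()
naive = sm.OLS(y, sm.add_constant(x)).fit()

print("naive OLS slope:", round(naive.params[1], 3))    # biased away from 2.0
print("2SLS slope:     ", round(stage2.params[1], 3))   # close to 2.0
```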
We have a good causal theory for the relationship to exist, and there does not seem to be any other plausible contending factors that might account for the pattern, as noted above.
Ignoring the fact that this is false, saying that there are no other plausible confounding factors does not make it true. It certainly would make statistics dramatically easier if we could just think of all of the possible causal links and control for them. Especially if we just prax away all endogeneity.
In the face of all this evidence, a skeptic might reply: “Well sure, you’ve found the relationship in Vancouver, but it might be spurious – maybe there’s something else driving that relationship.”
And here we come so close to enlightenment. But no such luck.
As explained above, this is highly unlikely, given the strength of the relationship and the absence of any plausible alternative factors.
The relationship is somewhat weaker (r = 0.76) than in Vancouver, likely due to the weaker relative influence of foreign ownership in Toronto, but the connection remains strong and unmistakable.
This doesn't make any sense. Less foreign influence should result in different data. I don't see why it should result in a different slope or a worse correlation. Why should we expect foreign ownership to affect Toronto differently than Vancouver? As far as I can tell they have similar laws.
The City of Toronto is also an outlier. If the City of Toronto is removed from the scatterplot, the correlation increases to r = 0.88.
No.
NO NO NO NO
Dropping 👏 Outliers 👏 without 👏 good 👏 reason 👏 is 👏 not 👏 acceptable.
"It makes my regression worse" is not a good reason. It is pure hackery.
You should never drop an outlier without discussion. This doesn’t mean it’s never appropriate to perform an analysis after having dropped an outlier but reality isn’t pretty and outliers do exist.
This may reflect amalgamation, which has the effect of pooling many lower income renters (who typically live in apartments) with higher income detached homeowners, thus boosting the price to income ratio.
Why is this the only point in the paper where the author considers the distribution of types of housing in the sample districts? The types of housing people live in are obviously not constant across municipalities and clearly affect the prices of housing, yet this is completely ignored in the rest of the paper.
We should be thankful for his [Richard Wonzy] candor and insight, we need more of that today.
Oof. I certainly agree that we need more insight although not exactly in the manner that the author means.
There are quite a few issues with this paper that I didn't address, primarily data issues, but I found it very difficult to write a good criticism of a statistical analysis that is so poor as to be incoherent. There assuredly are many issues I did not cover, but the statistical rigor on display is so poor that I cannot tell exactly what is meant, how the data is gathered, or even what the data is. To be entirely honest I'm not entirely clear on what this paper is saying. It seems to cycle through a number of different ideas around foreign ownership and housing prices but never quite settles on one thought.
As with many things, this paper would benefit significantly from being precise in the question it is trying to answer. Too often questions may sound similar but in actuality be very different and a failure to be precise enough in your questions can easily lead to poor statistical analysis.
I apologize that the above R1 is a bit disjointed and poorly organized. I wanted to just do the analysis correctly but I was unable to find adequate data.
I would like to finish this off by discussing some actual results on this issue by people that know what they’re on about.
Suher (2016) finds that non-resident owners have a significant impact on housing prices but with very little spillover. This is very important because the study presumes significant spillover effects from foreign buyers. Without significant spillover we should expect the price changes to be localized to locations that are desirable to non-resident owners. These are high-price areas and as such will not change the median home price. Thus any analysis of the effects of non-resident ownership needs to be conducted at a lower level than the municipal level, because heterogeneity in housing will make municipal-level data rather useless.
As a partial counterpoint to the above Fisher Füss and Stehle find spillover effects from regular housing transactions on the neighborhood level although these effects are diminished in booming housing markets like the one that Vancouver is currently experiencing. Again this shows the importance of being careful when analysing the situation because heterogeneity in housing will destroy your data if you don’t look closely.
This report talks about the difficulties of talking about housing affordability. In particular, “Caution, however, should be used in using this measure to assess affordability challenges among different income levels or household types as variations in the cost of other necessities would suggest the need for corresponding variations in the payment standard used” is important. We see a pattern emerging. Housing is rather heterogeneous, and clumping different types of housing together can often lead to misleading or nonsensical results.
This AEI presentation covers limitations of the price to income metric of affordability and presents an alternative metric that takes into account other affordability constraints on housing beyond the literal cost of housing.
This paper by Dragana Cvijanovic and Christophe Spaenjers is an excellent example of how to go about answering questions like those posed in the report. It finds results more or less in line with what you’d expect. Foreign non resident owners cause price increases in attractive (luxury) areas.
When I first read this paper I dismissed it as a combination of incompetence and hackery, but going back through it and learning more about the context of this discussion, this paper makes me mad. Not just because it's crap; if I got mad at every crap paper I wouldn't have enough time to sleep. It makes me mad because of how it legitimizes many xenophobic viewpoints. It sticks everything on foreigners. It ignores all other possibilities, preferring instead to focus on the evil other. Xenophobic writing like this is a more insidious kind of xenophobia. The blatantly xenophobic is easy to dismiss. But stuff like this, hiding behind a veil of science, is much harder to dismiss. How do you explain to a lay audience the complications of statistics? This particular paper is bad enough that you probably could, but stuff like this happens all the time and often in much more sophisticated ways.
I would like to talk about outliers for a second. This isn't really relevant to the R1 but they came up a bit and I want to clear some stuff up.
An outlier that happens because of sampling error, measurement error, etc is not an outlier. These are just not valid data points and no analysis should be performed with those data points.
In general outliers should not be ignored. Their existence does matter. Often it suggests that there is something that you did not consider in your model. Sometimes outliers can cause significance where none would otherwise exist. Others may not cause or remove significance but change it. There is no magical formula for what to do about outliers. It depends on the context of your analysis. Sometimes it is appropriate to drop them for a variety of reasons but this should not be done without reason.
If you don’t want to read this incredibly poorly organized mess here’s a TL;DR:
2SLS exists for a reason and no matter how hard you prax you can’t prax an entire flowchart DAG out of thin air.
submitted by Clara_mtg to badeconomics [link] [comments]


2019.08.14 08:48 agoodperson44 Is there a software package/library (excel or python) that performs FULL analysis on user inputted time-series data?

Is there some package where you input time series data and it will analyse everything about it?
E.g. you input time series data and it outputs: distribution, stationarity, autocorrelations, summary statistics, best-fit distribution, the optimal forecasting model (ARCH, MA, etc.), statistical test results like Dickey-Fuller and other unit root tests, etc.
Does something like this exist as a library in python or an add-in for excel?
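There may not be a single one-click package that does all of that, but several of the pieces are a few lines each in Python. A rough sketch with pandas, scipy and statsmodels, assuming the series sits in a one-column CSV (the file and column names are placeholders):

```python
import pandas as pd
from scipy import stats
from statsmodels.tsa.stattools import acf, adfuller

series = pd.read_csv("my_series.csv")["value"].dropna()   # placeholder file/column

# Summary statistics
print(series.describe())

# Stationarity / unit-root check (augmented Dickey-Fuller)
adf_stat, adf_pvalue, *_ = adfuller(series)
print("ADF statistic:", round(adf_stat, 3), "p-value:", round(adf_pvalue, 3))

# First ten autocorrelations
print(acf(series, nlags=10))

# Crude distribution check against normality (repeat with other candidates as needed)
print(stats.normaltest(series))
```

Model selection (ARCH vs. MA and so on) is the part that still needs judgment; libraries such as statsmodels (ARIMA family) and arch (ARCH/GARCH) will fit the candidates, but choosing among them is usually done by comparing information criteria by hand.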
submitted by agoodperson44 to econometrics [link] [comments]


2018.11.09 21:00 DangerDylan [Friday, 09. November]

World News

Saudi Arabia: Tiger Woods turns down largest ever overseas pay cheque to play in Saudi Arabia
Comments Link
Mexico's new government wants to legalize marijuana, arguing that prohibition has only helped fuel violence: “We don’t want more deaths."
Comments Link
Journalist Husnu Mahalli Gets Jail Term for Calling Erdogan 'Dictator'
Comments Link

All news, US and international.

Expert: Acosta video distributed by White House was doctored
Comments Link
Utah man dies from rabies; first in state in 74 years
Comments Link
No gunshots fired at NC high school, officials blame malfunctioning water heater
Comments Link

Science

US cigarette smoking rate reaches new low - Cigarette use among American adults is at the lowest it's been since the CDC started collecting data on the issue in 1965, down to 14% from over 40% in the mid-1960s, according to a new report.
Comments Link
Ancient DNA confirms Native Americans’ deep roots in North and South America
Comments Link
Social media use increases depression and loneliness - In the first experimental study of Facebook, Snapchat, and Instagram use, a University of Pennsylvania psychologist showed a causal link between time spent on the platforms and decreased well-being
Comments Link

Technology

Sprint is throttling Microsoft's Skype service, study finds.
Comments Link
In news that will shock absolutely no one, America's cellphone networks throttle vids, strangle rival Skype - Net neutrality probe finds it's not the end of the world, though
Comments Link
Google confirms dark mode is a huge help for battery life on Android
Comments Link

Sadly, this is not the Onion.

Retirement home alcohol ban slammed as 'un-Australian'
Comments Link
It's Fall, Which Means It's Time for Gonorrhea
Comments Link
Rescued 6-foot emu and feisty donkey are in love, creating trouble for NC shelter
Comments Link

Ask Reddit...

What old insults need to make a comeback?
Comments
What's the biggest fuck-up you have witnessed?
Comments
[Serious] What is the creepiest unexplained experience you ever had?
Comments

Sysadmin

The Value of IT
Comments
Got a raise, promotion, full shift freedom, and I couldnt be sadder
Comments
Reboots are good - to hell with uptime
Comments

Microsoft SQL Server

Found this super handy. Test connectivity to SQL Server without a udk, powershell or SSMS installed.
Comments Link
So, I have inherited a SQL environment...
Comments
Issue with SSMS - Can't connect to specific server
Comments

PowerShell

Send an email to the manager, when a user changes their Office 365 photo
Comments
Using Named Regex Matches to Build PSCustomObjects
Comments Link
[FREE] The Advanced PowerShell Scripting Crash Course 2019 Udemy
Comments

Functional 3D Printing

Guitar tuner attachment for electric drill
Comments Link
Those hooks to hang my bluetooth speaker in the shower
Comments Link
Clip-together smartphone mount for a rowing machine
Comments Link

Data Is Beautiful

How Green is Your State? [OC]
Comments Link
Seeing autocorrelation: Comparing IMDB ratings & number of movies per year by genre [OC]
Comments Link
The Republican Party's drastic 30 year transformation. [OC]
Comments Link

Today I Learned (TIL)

TIL: Mel Brooks put on a prosthetic 11th finger for adding his hand print on the Hollywood Walk of Fame.
Comments Link
TIL that Minnesota is the only state that requires any U.S. Flag sold in the state to be manufactured in U.S.A.
Comments Link
TIL Hitler had France surrender in the same railway carriage at the same spot France and England made Germany surrender in WWI
Comments Link

So many books, so little time

Young Charlotte Bronte manuscripts unseen for 200 years published for first time
Comments Link
Instead of setting a goal to read 'x amount of books in a year', try setting a goal of reading 30+ minutes a day, every day!
Comments
Amazon's AbeBooks backs down after booksellers stage global protest
Comments Link

OldSchoolCool: History's cool kids, looking fantastic

Alfred Hitchcock impersonating Ringo Starr, 1964.
Comments Link
A young boy getting a visit from his hometown heroine Debbie Harry. (1980)
Comments Link
Airline Hostesses circa 1970 .
Comments Link

aviation

Dream come true
Comments Link
Dog defends runway from Mustang attack
Comments Link
Airmen ready B-52H Stratofortresses during Global Thunder 19 at Minot Air Force Base, N.D.
Comments Link

Reddit Pics

This is what democracy looks like
Comments Link
Greyhound racing was banned in Florida yesterday. Regardless of your views on racing, this means about 8000 hounds will be looking for homes in the coming months. This is such a derp as an example. Open your home to one.
Comments Link
My boyfriend doesn’t have a lot of money for decorations so I painted him some!
Comments Link

.gifs - funny, animated gifs for your viewing pleasure

Excellent parenting
Comments Link
Water ricochet
Comments Link
Protest to protect the Mueller investigation in NYC tonight
Comments Link

A subreddit for cute and cuddly pictures

My son has loved my cat since the day he was born. She tolerates that love in a way I never thought possible.
Comments Link
Gave him a forever home yesterday and thought he hated me since he didn’t even look at me. Woke up to him next to me like this.
Comments Link
He was born with a downvote
Comments Link
submitted by DangerDylan to DangerDylanTLDR [link] [comments]


2018.07.27 17:59 backgroundmusic95 Can I have some help with normalizing a hotspot map?

I'm looking to normalize a hot spot map that I'm creating by point density-- I'm analyzing the region of answers to a survey for a political response map. Essentially survey responses of yes and no are correlated to lat-long data. I want to find locations where the yes and no responses are proportionally high, geographically speaking. To do this, I employed a hot spot analysis. Problem is that the hot spots are skewed to center between the urban centers I have under my extent. I want to remove this skew to give more weight to the rural areas. In essence, I believe I have to normalize based on point density (there are more responses from areas of greater population).
Here are my steps to create the hotspot map:
Integrate (tolerance is 500 meters)
Collect events
incremental spatial autocorrelation (input is ICOUNT)
Hotspot analysis
IDW Interpolation based on GZScore
I want to remove that skew between the two cities. I know I have to do that somewhere in the beginning before the hotspot analysis.
My data is in the form of latitude longitude excel documents for each survey response: for example "Question 1 Yes" is about 4000 rows separated into Lat/Long columns to give XY coordinates
any help is appreciated!
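One way to think about the normalization, sketched outside ArcGIS in plain Python: rather than hot-spotting raw yes counts, compute a yes rate per grid cell (yes responses divided by all responses in that cell), which removes the skew toward densely populated areas. The file names, column names and grid size below are placeholders for whatever the actual survey export looks like:

```python
import numpy as np
import pandas as pd

# Placeholder exports: one row per survey response, with Lat/Long columns
yes = pd.read_csv("question1_yes.csv")
no = pd.read_csv("question1_no.csv")
both = pd.concat([yes, no])

# Common grid over the study extent (60x60 cells is an arbitrary choice)
lon_edges = np.linspace(both["Long"].min(), both["Long"].max(), 60)
lat_edges = np.linspace(both["Lat"].min(), both["Lat"].max(), 60)

yes_counts, _, _ = np.histogram2d(yes["Long"], yes["Lat"], bins=[lon_edges, lat_edges])
all_counts, _, _ = np.histogram2d(both["Long"], both["Lat"], bins=[lon_edges, lat_edges])

# Share of "yes" per cell; cells with no responses stay undefined rather than zero
yes_rate = np.divide(yes_counts, all_counts,
                     out=np.full_like(yes_counts, np.nan),
                     where=all_counts > 0)

# Feeding this rate (instead of the raw yes counts) into the hot spot analysis
# weights a rural cell with 8 of 10 "yes" the same as an urban cell with 800 of 1000.
```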
submitted by backgroundmusic95 to gis [link] [comments]


2017.04.06 01:00 ninjarubo [University Statistics] Aitkens GLS

snip of excel: https://gyazo.com/643e4b08918b7daf6a1fbad02c9e2739
I'm trying this Aitken GLS I just learned and the numbers are changing drastically. From OLS to ridge regression it's understandable, but I can't explain why GLS changes them this much.
I had Autocorrelation and Homoscedasticity.
Now I need to find the estimators, but I think I made a mistake, because they're too different.
Is this how GLS works?
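As a rough sanity check on how much the numbers "should" move, here is a minimal sketch of feasible (Aitken-style) GLS under AR(1) errors with statsmodels, on made-up data. If the point estimates change drastically, rather than mainly the standard errors, that would usually suggest a mistake in building the weighting matrix.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200

# Toy regression with strongly autocorrelated AR(1) errors
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e                       # true coefficients: 1.0 and 2.0

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)   # feasible GLS for AR(1) errors

print("OLS coefficients:", np.round(ols.params, 3))
print("GLS coefficients:", np.round(gls.params, 3))
# Point estimates are typically close; the main difference is in the standard errors.
```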
submitted by ninjarubo to HomeworkHelp [link] [comments]


2017.03.07 02:19 white_lightning [University Statistics] Need help understanding Autocorrelation

I am working on an Excel lab for a fisheries biology class, and just ran a statistical catch-at-age model. At the end of the lab, I have some questions and one of them is this:
How might autocorrelation between fishing mortality, population size, and catch affect our estimates of population size? Think about the equation we use, and the relationship between large and small numbers (Hint: 2 x 50 = 100, as does 50 x 2)
And I feel like I am missing an understanding of what autocorrelation is so I can answer this question. I can supply the equation in question if necessary, but if someone could just help me understand the concept for autocorrelation, I am sure I can work out a good answer myself!
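Not a full answer, but one way to get a feel for what autocorrelation means is to compute it on a small made-up catch series: lag-1 autocorrelation is just the correlation between the series and the same series shifted by one time step. A quick sketch (the numbers are invented):

```python
import pandas as pd

# Invented annual catch series: each year's value depends partly on the previous year's
catch = pd.Series([120, 135, 150, 160, 155, 140, 130, 125, 140, 160])

lag1 = catch.autocorr(lag=1)   # correlation of catch[t] with catch[t-1]
print(f"lag-1 autocorrelation: {lag1:.2f}")

# A high value means one year's catch carries information about the next, so
# fishing mortality, population size and catch are not independent from year to
# year, and estimation errors in one quantity propagate into the others.
```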
submitted by white_lightning to HomeworkHelp [link] [comments]

