Blog

There are a number of existing tutorials that outline how to deploy an R Shiny app with Docker. R is a great programming language for statistical analysis – the community is rich with packages that find new uses every day. Shiny is a web framework for R that lets you build a UI so end users can interact with your code in a web browser. Docker is a platform for delivering software packages (and their dependencies) consistently across different machines; another computer running Docker can reproduce your code without it breaking. Our use case here is taking R code that runs locally and running it on a cloud web server. I highly recommend checking out the official documentation for the essential details. This guide will highlight the pitfalls and workarounds I ran into when setting up a Shiny app…
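As a minimal sketch of the setup (the base image, package list, and file layout here are illustrative assumptions, not the exact configuration from the post), a Dockerfile for a Shiny app might look like:

```dockerfile
# Minimal sketch, assuming app.R sits in the project root;
# the rocker/shiny image ships with Shiny Server preconfigured.
FROM rocker/shiny:latest

# Install whatever CRAN packages the app depends on (examples only)
RUN R -e "install.packages(c('ggplot2', 'dplyr'))"

# Copy the app into Shiny Server's default app directory
COPY app.R /srv/shiny-server/

# Shiny Server listens on port 3838 by default
EXPOSE 3838
```

You would then build and run it with something like `docker build -t my-shiny-app .` followed by `docker run -p 3838:3838 my-shiny-app`, and browse to port 3838.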

Read more

Facebook regularly receives flak for political advertising scandals. YouTube got its share of criticism during the Adpocalypse in 2017 (albeit for other reasons). Twitter changed its ad policy in late 2019 to preemptively avoid backlash, and now globally prohibits political advertisements. We can speculate that, prior to this change, political groups of all sorts (think: politicians, political action committees, think tanks, lobbying groups, and corporations) used the Twitter Ads platform to further policy agendas. Easily Accessible Information and Similar Audiences There aren’t necessarily political audience targeting lists available by default in the Twitter Ads platform – but Twitter Ads allows you to target follower look-alike audiences, which in turn lets advertisers reach people with interests similar to an account’s followers. So hypothetically, by targeting some political figure’s follower look-alike handle, you’d have the ability to show ads to people with a high likelihood of being in agreement…

Read more

Super Bowl Ad Under 10 Thousand Dollars

The Super Bowl. It’s one of the most watched television broadcasts in the United States, attracting anywhere from 90 million to 114 million viewers every year. Gary Vee calls Super Bowl ads underpriced in today’s market, even with their exorbitant cost of a few million dollars. But do Super Bowl ads really need to cost a few million dollars? No – here’s how to get them at a discount. Getting a Super Bowl Ad at a Discount To be clear, a national Super Bowl ad will cost you a few million dollars. But what TV salespeople don’t want you to know is that you can buy Super Bowl ads that will only air in local television markets. 1 – Find a DMA with under 500,000 people So, the first thing you want to do is find a designated market area (DMA) with under 500,000 people. You won’t get the entire…

Read more

3 Brands – Experiment with the Largest – Adapt Winning Elements for the Other Two Today I’m taking a look at Tapestry Inc’s recent branding and product campaigns. Tapestry is a holding company for 3 luxury fashion brands – Coach, Kate Spade, and Stuart Weitzman. When it comes to media spend, it looks like Coach receives the majority of ad dollars and has the largest overall social following. The Kate Spade New York and Stuart Weitzman brands may receive a smaller portion of ad spend, but Tapestry is aggressively growing them on Facebook and Instagram. Tapestry experiments with Coach campaigns and then adapts the successful activations for the Kate Spade and Stuart Weitzman campaigns. Influencer Marketing In the past 2 years, Coach has relied heavily on influencer marketing, creating the “Coach Fam”, a group of diverse celebrities and influencers who serve as ambassadors…

Read more

Twitter offers tools to analyze tweet performance for your own accounts via Twitter Analytics and for the accounts of others via TweetDeck. Unfortunately, both platforms have limited data export functionality; there is no clean, easy way to export data from the web user interface. The method outlined in this post avoids Twitter API fees and is compliant with the TOS if you download the HTML files locally. You can save local HTML copies of the webpages found at analytics.twitter.com and tweetdeck.twitter.com that are associated with your account. The full code can be found on GitHub. There are 4 other files that you’ll want to include in the same directory where you downloaded the HTML files: twitter_scraping_css.json twitter-analytics-main.r twitter-tweetdeck-main.r Helpers.R twitter_scraping_css.json This JSON file contains CSS selector templates for targeting relevant structured data in the Twitter Analytics and TweetDeck HTML files. The CSS selectors for TweetDeck are written generically with a %s parameter in…
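The repo's scrapers are written in R, but the parameterized-selector idea is easy to illustrate in a few lines of Python. The template strings and keys below are hypothetical stand-ins for what twitter_scraping_css.json might contain, not its actual contents:

```python
# Hypothetical selector templates mirroring the idea behind
# twitter_scraping_css.json (real keys and selectors may differ).
selector_templates = {
    "tweetdeck_tweet_text": "div.js-column[data-column='%s'] p.tweet-text",
    "tweetdeck_timestamp": "div.js-column[data-column='%s'] time.tweet-timestamp",
}

def build_selector(template: str, column_id: str) -> str:
    """Fill the %s placeholder with a specific TweetDeck column id."""
    return template % column_id

# Target the tweet text of one specific column in the saved HTML
selector = build_selector(selector_templates["tweetdeck_tweet_text"], "column-3")
print(selector)
```

The generic template can then be filled in once per TweetDeck column before being handed to an HTML parser's CSS-select function.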

Read more

Almost everyone is familiar with the phrase “location, location, location” when evaluating a property. Being mindful of location when purchasing out-of-home advertising placements is key to running successful outdoor campaigns. Python and the Here.com API can assist in location research when purchasing outdoor media (billboards, bulletins, and digital signage) remotely, or when there are many locations to research. The full code can be found on GitHub. This post outlines a method for querying nearby places corresponding to a set of geographical coordinates associated with out-of-home media placements. Researching nearby places can help you determine whether a location is brand friendly and consistent with your campaign messaging and target audience. There are 3 files that you’ll need: credentials.yaml boards.csv proximity_ooh.py credentials.yaml The file credentials.yaml is where you’ll specify your app id and app credentials to authorize access to the Here.com API. boards.csv The file boards.csv should…
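As a rough sketch of the workflow, the snippet below reads board coordinates and composes one nearby-places query per board. The CSV columns, endpoint, and auth scheme here are assumptions for illustration – check the HERE developer documentation for the current Geocoding & Search "browse" endpoint and credential format before using this:

```python
import csv
import io
from urllib.parse import urlencode

# Hypothetical rows standing in for boards.csv; the real file's columns may differ.
boards_csv = """board_id,lat,lng
B001,40.7580,-73.9855
B002,34.0522,-118.2437
"""

# Shown for illustration only; consult the HERE API docs for the
# current browse endpoint and authentication method.
BASE_URL = "https://browse.search.hereapi.com/v1/browse"

def build_browse_url(lat, lng, api_key, radius_m=500):
    """Compose a nearby-places query for one board's coordinates."""
    params = {
        "at": f"{lat},{lng}",                       # search centre
        "in": f"circle:{lat},{lng};r={radius_m}",   # limit results to a radius
        "apiKey": api_key,
    }
    return f"{BASE_URL}?{urlencode(params)}"

for row in csv.DictReader(io.StringIO(boards_csv)):
    print(row["board_id"], build_browse_url(row["lat"], row["lng"], "YOUR_KEY"))
```

Each URL can then be fetched and the returned places inspected for brand safety around that placement.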

Read more

This post outlines a framework for forecasting short-term (i.e. daily tick data) directional movements of equity prices. The method used here relies on support vector machines and treats the system like a Markov chain. Historical data is downloaded from stooq.com. This is not investment advice or a recommendation of an investment strategy; it is provided for educational purposes only. The following code comes with no warranties whatsoever. The code, which can be found in its entirety on GitHub, attempts to model the directional movement (i.e. above or below the previous close) of a stock’s closing price as a function of the following variables: the return of the equity at lags 1 and 3, the return of SPY at lags 1 and 3, the return of QQQ at lags 1 and 3, and the return of UVXY at lag…
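The post's code is on GitHub; as a stripped-down illustration of the same idea (SVM on lagged returns), here is a Python sketch using scikit-learn and synthetic prices in place of the stooq.com downloads and the SPY/QQQ/UVXY features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic daily closing prices standing in for downloaded history
close = np.cumprod(1 + rng.normal(0, 0.01, 300)) * 100
ret = np.diff(close) / close[:-1]  # simple daily returns

# Features: the equity's return at lag 1 and lag 3;
# target: whether the next day's return is positive
X = np.column_stack([ret[2:-1], ret[:-3]])
y = (ret[3:] > 0).astype(int)

# Fit on all but the last 20 days, then score the holdout
model = SVC(kernel="rbf", gamma="scale")
model.fit(X[:-20], y[:-20])
pred = model.predict(X[-20:])
print("directional accuracy on holdout:", (pred == y[-20:]).mean())
```

The full model would simply widen `X` with the lagged SPY, QQQ, and UVXY return columns; the fit/predict structure stays the same.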

Read more

The Google DV360 platform is making great strides in its feature set and usability. The platform allows many bulk actions to be performed via the user interface; however, targeting unique audiences and geographies is still a very manual process. This code aims to increase efficiency when targeting multiple unique list combinations or unique geographies. These methods are particularly applicable when testing audience lists and when localizing by geography. The full code can be found on GitHub. The code operates in 2 steps. The first fetches default SDF templates that you pre-specify in a dummy insertion order. The second creates new SDFs using your template along with a modifier file where you specify line item names and audience list ids or geography ids. In order for the first step to work you must create a Google service account and add it as a user to…
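The second step – cloning a template row once per modifier entry – can be sketched in a few lines. The column names and geography ids below are hypothetical placeholders (real SDF files carry many more columns, and actual DV360 geo ids will differ):

```python
import csv
import io

# A stripped-down SDF template row (illustrative columns only)
template = {"Line Item": "li_template", "Geography Targeting - Include": ""}

# Hypothetical modifier file: one new line item per geography id
modifiers = [
    {"Line Item": "li_nyc", "Geography Targeting - Include": "200501"},
    {"Line Item": "li_la",  "Geography Targeting - Include": "200803"},
]

def expand(template, modifiers):
    """Clone the template once per modifier row, overriding the named fields."""
    return [{**template, **m} for m in modifiers]

# Write the expanded rows out as a new SDF-style CSV
rows = expand(template, modifiers)
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=template.keys())
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The same expansion works for audience list ids: add the relevant targeting column to the template and override it per modifier row.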

Read more

This post overviews code for testing multiple variants using Bayesian statistics. It is especially useful when there is a defined action that can be classified as a success (i.e. a click or conversion) and when we have complete information about the number of trials (i.e. impressions). The full code can be found on GitHub. The code relies heavily on simulation, sampling from beta distributions given parameters for trials and successes. No packages are necessary outside of base R. First we define helper functions that act as wrappers for the rbeta and qbeta functions in base R: RBetaWrapper will be used for sampling and QBetaWrapper for calculating credibility intervals. Our MVTest function accepts the number of simulated draws, a trial vector, a success vector, a vector of quantiles to be used in calculating credibility intervals, and an integer specifying how many digits the credibility intervals should be rounded to. The…
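The original is base R; a compact Python analog of the same beta-sampling simulation is below. The uniform Beta(1, 1) prior and the example trial/success counts are my assumptions for illustration, not values from the post:

```python
import numpy as np

rng = np.random.default_rng(42)

def mv_test(trials, successes, n_draws=100_000, quantiles=(0.025, 0.975)):
    """Sample posterior success rates from Beta(successes + 1, failures + 1)
    per variant; report P(variant is best) and credibility intervals.
    Assumes a uniform Beta(1, 1) prior."""
    trials, successes = np.asarray(trials), np.asarray(successes)
    samples = rng.beta(successes + 1, trials - successes + 1,
                       size=(n_draws, len(trials)))
    # Share of draws in which each variant had the highest sampled rate
    p_best = np.bincount(samples.argmax(axis=1), minlength=len(trials)) / n_draws
    intervals = np.quantile(samples, quantiles, axis=0).T.round(4)
    return p_best, intervals

p_best, ci = mv_test(trials=[1000, 1000], successes=[30, 45])
print("P(best):", p_best)
print("95% credibility intervals:", ci)
```

With 30 vs. 45 successes out of 1,000 trials each, the second variant wins nearly every simulated draw, which is exactly the kind of summary the MVTest function reports.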

Read more

This post discusses how to use polynomial regression on digital advertising data. Polynomial regression can help us better understand the relationship between spend and impressions (or spend and clicks). The method is particularly useful when looking at daily data with variability in daily spend. Models can be used to analyze, estimate, and benchmark the performance of future campaigns. The full code can be found on GitHub. The code uses a second-order polynomial to allow for diminishing marginal returns. For impressions the function takes the form Impressions = β₀ + β₁(Spend) + β₂(Spend)², or in the case of clicks, Clicks = β₀ + β₁(Spend) + β₂(Spend)². To run this code begin by importing the ggplot2, scales, and rio packages. First we define a function to fit a second-order polynomial regression given two variables. This function also creates a ggplot object that maps a scatter plot of actual observations along with a regression line of predicted values. Next we define…
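The post's implementation is in R with ggplot2; the fit itself can be illustrated in Python with numpy's polyfit on synthetic spend/impressions data (the coefficients and noise level below are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily spend and impressions with diminishing returns (illustration only)
spend = rng.uniform(50, 500, 60)
impressions = 120 * spend - 0.08 * spend**2 + rng.normal(0, 800, 60)

# Fit impressions = b0 + b1*spend + b2*spend^2
# (polyfit returns coefficients highest-degree first)
b2, b1, b0 = np.polyfit(spend, impressions, deg=2)
predict = np.poly1d([b2, b1, b0])

print(f"fit: impressions ~ {b0:.1f} + {b1:.2f}*spend + {b2:.4f}*spend^2")
print("predicted impressions at 300 spend:", round(predict(300)))
```

A negative fitted `b2` is what captures diminishing marginal returns: each additional unit of spend buys slightly fewer incremental impressions.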

Read more
