They don't require any assumptions about the relationship between independent and dependent variables and work in non-linear environments. Was it consistent? In this case, lexicon-based sentiment analysis of tweets on Windows 10 revealed that only 24% of the users had a positive opinion. This is where recommendation systems come into play. Most high-school dropouts see the result of their decision to leave school very clearly in their earning potential. Designing a financial hardship offer involves changing certain terms of an existing loan contract to make debt payments more affordable to borrowers in financial distress. On the basis of the estimated property tax given by the models, we have found that the governmental property taxation needs some correction. The objective of this project is to implement Collaborative Filtering on a huge data set of 500,000 ratings in a distributed environment on Apache Spark. Based on exploratory data analysis, 70% of voters never changed their votes, and 20% of voters changed at least once in the last three elections. The dataset that is analyzed lists all real estate transactions in Cincinnati from 1998 to 2009. Kala Krishna Kama, Network Intrusion Detection System Using Supervised Learning Algorithms, August 2015, (Dungang Liu, Yichen Qin) The client is an American grocery retailer in Cincinnati, Ohio. The data was collected from the 1994 census database by selecting records between the ages of 16 and 100 and applying a few more filters. Fluctuating demand with high variance needs to be fulfilled daily, on time, without declining any requesting flight, to keep the company's reputation in high standing. Finally, the usage of profanity in popular music has skyrocketed in the last two decades, showing that music has not only become more negative but also more vulgar. ("Bust-out fraud white paper," 2009, Experian Information Solutions, Inc.) I used GBM as my modelling technique for predicting fraud accounts.
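The lexicon-based tweet scoring described above can be sketched in a few lines of Python; the word lists and tweets below are hypothetical stand-ins for illustration, not the study's actual lexicon or data.

```python
# Minimal lexicon-based sentiment scoring. The lexicon and the example
# tweets are made up; a real analysis would use a published lexicon.
POSITIVE = {"great", "love", "fast", "stable"}
NEGATIVE = {"slow", "crash", "hate", "bug"}

def sentiment(text: str) -> str:
    """Classify text by counting lexicon hits: positive minus negative."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = ["I love the fast boot time", "constant crash and slow updates", "it is an OS"]
share_positive = sum(sentiment(t) == "positive" for t in tweets) / len(tweets)
```

Aggregating the per-tweet labels into a share of positive opinions is how a figure like the 24% above would be produced.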
We also examine a number of position-specific metrics to measure player quality. These tools became part of ongoing processes used by the City; in fact, the city thereafter specifically used dashboards that I created to discuss issues in their “stat meeting”. The data was obtained from Kaggle. For example, the hospital requires that 90% of all cleaning requests should be completed within 60 minutes. In this study, we developed a statistical framework for meta-analysis of differential gene co-expression. As a Data Science Analytics Intern, I directly work with the Finance Analytics team on multiple projects. Time series analysis is commonly used to analyze and forecast economic data. The model is developed to predict which previously purchased products will be in a user’s next order. The “CACM Score Model” project is a team project for building models to group defaulted accounts into three score bands and design experiments for the placement strategy of different score band accounts. Leverage is used to identify the outliers. Accurate demand forecasting is essential for operational planning. But humans are often still better at answering questions about opinions, recommendations, or personal experiences. Our target is to recognize malignant tissue from its dimensions (the mean, standard error, and worst values). Although research has documented differences in growth rates between dialysis patients and healthy children, no large-scale effort has been devoted to the development of growth charts specifically for children with ESRD. This expected value is then modeled to test for linearity, as well as to find coefficients to value each starting situation. The Random Forest model presented the lowest out-of-sample MSE (19.22) and the smallest difference between in-sample and out-of-sample MSE. Then the transformed data will be input into Cox survival models and will be compared according to different PCA methods.
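As background for the leverage-based outlier screening mentioned above: in the simple-regression case leverage has a closed form, h_i = 1/n + (x_i − x̄)² / Σ(x_j − x̄)², and points well above the average leverage are flagged. A minimal sketch on made-up data:

```python
# Leverage values for simple linear regression with an intercept.
# The data are illustrative; 20.0 is deliberately a high-leverage point.
def leverages(x):
    n = len(x)
    mean = sum(x) / n
    sxx = sum((xi - mean) ** 2 for xi in x)
    return [1 / n + (xi - mean) ** 2 / sxx for xi in x]

x = [1.0, 2.0, 3.0, 4.0, 20.0]
h = leverages(x)
# Rule of thumb: flag points with leverage above twice the average (p + 1) / n.
flagged = [xi for xi, hi in zip(x, h) if hi > 2 * 2 / len(x)]
```

A useful sanity check is that the leverages of a simple regression with intercept always sum to 2 (the number of fitted parameters).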
Hence, the user-based collaborative filtering method will help businesses recommend better products to their customers and thus improve their customer experience. Having sight of this helps different teams and departments to prepare a plan of action for the coming years. The VARMAX model allows for multivariate forecasting and takes advantage of information contained in the time-series of forecasting variables. On the other hand, overestimating demand results in a surplus of inventory, incurring high carrying costs. The open-ended question of "what do these data tell us?" Khiem Pham, Optimization in Truck-and-Drone Delivery Network, April 2019, (Leonardo Lozano, Yiwei Chen) Estimating the future value of a customer is one of the core pillars of marketing strategy. Optimal production levels minimize labor and raw material costs, and proper inventory management minimizes material handling and stockout costs. Because of globalization, people all around the world are now able to access different kinds of music. Once the license plate is detected, it would undergo processing, and the text data can easily be edited, searched, indexed and retrieved. Season ticket holders (STH) are important for both collegiate and professional sports teams. The dataset can be considered a modernized and expanded version of the Boston Housing dataset, which was used to build several regression models during the Data Mining class. This provides us a bird’s-eye view of the user perceptions about the fine foods and also acts as a powerful feedback mechanism for retailers, which they could use to make immediate corrections and improve their products and services. The decision problem is to predict an optimal mailing size or cutoff in a future mailing to maximize profit. The reorder point is essentially the right time to order stock, considering the lead time to get the stock from the supplier and the safety stock available. The data is collected from the Barclays data warehouse, with almost 139 variables for 12 months.
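The reorder-point logic described above is commonly computed as expected demand over the lead time plus safety stock; a sketch under that textbook formula, where the service-level z value and the demand figures are illustrative assumptions, not the project's numbers:

```python
import math

# Reorder point = expected lead-time demand + safety stock.
# Safety stock here uses the standard z * sigma_d * sqrt(lead_time) form;
# all inputs are illustrative.
def reorder_point(daily_demand_mean, daily_demand_sd, lead_time_days, z=1.65):
    safety_stock = z * daily_demand_sd * math.sqrt(lead_time_days)
    return daily_demand_mean * lead_time_days + safety_stock

rop = reorder_point(daily_demand_mean=40, daily_demand_sd=8, lead_time_days=4, z=1.65)
```

When on-hand inventory falls to `rop`, a new order is placed, so demand during the supplier lead time is covered with the chosen service level.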
The dataset contains transaction-level data provided by nearly 30 banks, across 24 months, with an approximate size of 3+ million records. Mansi Verma, 84.51° Capstone Project, August 2017, (Michael Fry, Mayuresh Gaikwad) They tend to have positive and negative influences, respectively, not only on an organization’s success but also on the existing workforce in the organization. Various statistical methods have been used to find the best model as per the data. Model comparisons are done on the basis of the cost of misclassification with a case study of two scenarios. This project models the path of a patient through two systems. The North-East region, which includes Macy’s flagship Herald Square store, has the highest LOS amongst all stores, and this region has a positive relation with LOS. This project explores different ways to forecast unit sales of products with the objective of zeroing in on the model with the least error. A further study to confirm their relationship to the survival time of GBM and possible mechanisms would contribute tremendously to the understanding of GBM. To tackle this, a combination of two time series models is selected as the final solution. The classification goal is to predict if the client will subscribe to a term deposit (variable y). Therefore, our results show that GSRS has the highest prediction accuracy, SVD next, and UBCF last. With 2 convolutional layers, I could achieve a classification accuracy of 89% on the Street View House Numbers dataset. The insights obtained from the analysis will help the Customer Care team at Priceline redesign and optimize the policies for each of the cancel reasons. The BLINEX loss function is a parsimonious loss function with three parameters: a bounding parameter, a scaling parameter, and an asymmetry parameter. Many mathematical models have been developed to try to control risk in an investment portfolio, with one of the most widely used models being the value-at-risk (VaR) model.
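One common form of the value-at-risk model mentioned above is historical VaR: the empirical quantile of past losses exceeded on only (1 − confidence) of days. A minimal sketch on made-up daily returns:

```python
# Historical value-at-risk: sort past losses and take the empirical
# quantile at the chosen confidence level. Returns are illustrative.
def historical_var(returns, confidence=0.95):
    losses = sorted(-r for r in returns)      # losses, ascending
    idx = int(confidence * len(losses))       # empirical quantile index
    idx = min(idx, len(losses) - 1)
    return losses[idx]

returns = [0.01, -0.02, 0.003, -0.05, 0.02, -0.01, 0.015, -0.03, 0.005, 0.0]
var_95 = historical_var(returns, 0.95)        # worst-case daily loss at 95%
```

With only ten observations the 95% quantile is simply the worst loss; real VaR estimates use far longer return histories.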
The model with the highest predictive accuracy for multiple categories is Random Forest, while the predictive models for the single category “Family” do not differ significantly in performance. Image recognition has been an important part of technological research in the past few decades. An example based on top-selling products also demonstrates which products tend to be purchased before and after a given item, using left-hand and right-hand association rules. This project investigates the forest-cover change during the period 1989-2005 in the region, with the combined use of remote-sensing satellite images, geographic information systems (GIS), and data-mining techniques. Lindner College of Business. While both of these variables and their interaction terms are statistically significant, the regression models explain only a small amount of the observed variance. Zarak Shah, Bank Loan Predictions, July 2019, (Yichen Qin, Edward Winkofsky) Abhishek Chaurasiya, Tracking Web Traffic Data Using Adobe Analytics, August 2016, (Dan Klco, Dungang Liu) This potentially leads to a tremendous undertaking that is in effect useless, due to the lack of a market analysis before work begins. The savings achieved by redistricting will be considered with respect to reduction in total miles, fuel costs, student time spent, and safety. I will focus on analyzing STH renewals for the University of Cincinnati’s Football team. Traditionally, music was generated by talented artists and was deemed to be a skill owned by a select few who had the sense of creativity and the skill in the area. Oil prices can depend on many factors, leading to a volatile market. The reorder point prediction would reduce the frequency of ordering and would help the floor managers in making better reorder plans. The fraudster makes on-time payments to maintain a good account standing, with the intent of bouncing a final payment and abandoning the account.
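The left-hand/right-hand association rules referenced above reduce to support and confidence computations over baskets; a small sketch on hypothetical market-basket data:

```python
# Support and confidence for an association rule LHS -> RHS.
# Baskets are illustrative, not the project's transaction data.
def rule_metrics(baskets, lhs, rhs):
    lhs, rhs = set(lhs), set(rhs)
    n = len(baskets)
    n_lhs = sum(lhs <= set(b) for b in baskets)          # baskets containing LHS
    n_both = sum((lhs | rhs) <= set(b) for b in baskets) # baskets containing both
    support = n_both / n
    confidence = n_both / n_lhs if n_lhs else 0.0
    return support, confidence

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"milk", "butter"}, {"bread", "milk", "butter"}]
support, confidence = rule_metrics(baskets, {"bread"}, {"milk"})
```

A "left-hand" rule fixes the antecedent (what is bought first) and a "right-hand" rule fixes the consequent; both are scored with the same two metrics.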
It was performed after the First Presidential debate to capture the mood of people on social media; the tweets were classified as positive, negative, and neutral, and a sentiment score was calculated for each of the presidential candidates. Motorists in AGE3 (25-30) have a higher chance of a meniscus knee injury. Our data set comes from a retail company that has hundreds of stores, each of which contains hundreds of business departments. It is an important component in the profitability of an organization. This project provides a measure of efficiency by performing a super-efficiency data-envelopment analysis (SEDEA) on hospitals from Ohio and Kentucky that perform percutaneous cardiovascular surgery with drug-eluting stents. Silky Abbott, Simulation of the Convergys Contact Center in Erlanger, KY, July 24, 2012 (David Kelton, Jaime Newell) Through this, we hope to identify pertinent information that results in better user engagement, which would ultimately result in increased advertising revenue. Ultimately, the four models developed predicted "Default"/"No default" correctly over 75% of the time in the training sample and over 60% in the testing sample. Breast cancer is one of the most common types of cancer in women in the United States, ranking second among cancer deaths. After examining the results from different machine learning models, we conclude that the results using the XGBoost model are promising: we achieve an accuracy rate of 81.5%. The project will present recommendations as a solution to reduce churn. Hang Cheng, An Analysis of the Cintas Corporation's Uniform Service, December 5, 2013 (David Rogers, Yichen Qin) The ability to predict the peak time of the day and the day of the week can allow businesses to manage them in a more efficient and cost-effective manner. After customer segmentation, profiling of the customer segments was done using demographic and socioeconomic variables.
Certain non-branded keywords such as "shred" and "extinguish" are highly used in web search, and Tuesday, Wednesday, and Thursday are the days with the most visits. This kind of analysis is extremely useful for companies, which can tailor and time their content based on the analyzed data to generate maximum readership on their sites, thereby generating more revenue. The popularity of an article can be determined by the number of views or shares it receives from people. Companies invest a lot of resources in developing database systems that store voluminous information on their customers. The dataset provided has 8523 records and 11 predictor variables. A team wins by being the first to destroy a large structure located in the opposing team's base, called the "Ancient". Apoorv Joshi, Predicting Realty Prices Using Sberbank Russian Housing Data, July 2017, (Dungang Liu, Liwei Chen) Jamie H. Wilson, Fine Tuning Neural Networks in R, April 2018, (Yan Yu, Edward Winkofsky) By using a combination of efficient data pre-processing, ideal modeling of input parameters, and iterations of clustering analysis, the study attempts to discover the best solution with the desired properties. Linxi Yang, Analysis of Feedback from Online Healthcare Consultation with Text Mining, July 2017, (Peng Wang, Liwei Chen) The objective of this study is to use the energy data to build a model that can predict the Energy Star Score of a building and interpret the results to find the factors which influence the score. The structural breaks have been searched for with a practitioner approach based on minimizing the time series regression RSS (residual sum of squares), which is described in this paper (hereinafter referred to as “Minimum RSS search”).
David A. Pasquel, Operational Cost-Curve Analysis for Supply-Chain Systems, November 2, 2011 (David Rogers, Amitabh Raturi) In this project we have data from a telecommunication company, and we try to determine the reasons for customer churn and build a predictive model to give the probability of customer churn with the given data. Non-coherent output: “the soiec and the coned and the coned and the cone”. Fei Xu, Multi-period Corporate Bankruptcy Forecasts with Discrete-Time Hazard Models, December 5, 2013 (Yan Yu, Hui Guo [Department of Finance]) Then the time-varying regression coefficients are further studied based on time-series analysis with a Box-Jenkins model. Guansheng Liu, Development of Statistical Models for Pneumocystis Infection, July 2018, (Peng Wang, Liwei Chen) Open-Access is a system of prescribed changes in clinic scheduling and operating procedures that is designed to bring NP Lag down to near-same-day access, or an NP Lag of nearly zero. These methods make neural networks good at finding complex nonlinear relationships between predictor and response variables, as well as interactions between predictor variables. The goal of this project will be to study and apply machine learning techniques to identify whether a comment is toxic or not. The technique of term frequency–inverse document frequency is used to create keywords across each description and later create a TF-IDF matrix. There are 5822 observations in the training data set and 4000 observations in the testing data set. The results and findings from the analysis provide management with a better data-driven approach and solution to make policies and decisions regarding the fate of the campaign/product. We aim at processing the data and getting it to a form that is presentable.
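The TF-IDF construction mentioned above can be sketched directly: term frequency within a document, discounted by how many documents contain the term. The documents and the plain log-idf weighting are illustrative choices, not necessarily those used in the project.

```python
import math

# Build a TF-IDF matrix (one dict of term weights per document).
# Documents are made up; real pipelines also stem, lowercase, and
# strip punctuation/stopwords.
def tfidf(docs):
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    df = {t: sum(t in doc for doc in tokenized) for t in vocab}  # document frequency
    matrix = []
    for doc in tokenized:
        row = {t: (doc.count(t) / len(doc)) * math.log(n / df[t]) for t in vocab}
        matrix.append(row)
    return matrix

docs = ["data science project", "data mining project", "survival analysis"]
m = tfidf(docs)
```

Terms that appear in every document get weight zero, while terms unique to one document score highest, which is exactly what makes TF-IDF useful for extracting keywords from descriptions.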
Hence, Natural Language Processing applications like sentiment analysis help companies to improve the online e-commerce experience for their users and also to extract insights from unstructured information such as customer reviews. The objective of this project is to explore the application of different risk modeling techniques, along with techniques to tackle class imbalance, on financial lending data in order to maximize expected returns while minimizing expected variance or risk. Each algorithm brings its own pros and cons to the machine learning community, and many have similar uses. With the acquisition of First Niagara Bank in 2016, Key Bank acquired a $2.6B indirect auto portfolio. This model is then validated on the test data set, and the results are checked for the performance of the model. These algorithms have higher interpretability, and they help us understand the significance of different variables in the analysis. Through this paper, I wish to provide researchers the ability to utilize machine learning with Python. Based on the nature of the request, they are directed to a respective department. The benchmarking involved computing the cash to non-cash transaction ratio for all the countries, etc. It also collaborates with 300 CPG (consumer packaged goods) clients by driving awareness, trial, sales uplift, earned media impressions, and ultimately customer loyalty. A Random Forest model was used to identify the trigger factors and also predict the high donors in the prospect population. Some are continuous and some are categorical. Credit cards have become an integral part of our financial system, and most people use them for their daily transactions. Argus possesses transaction, risk, behavioral, and bureau-sourced data that covers around 85-95% of all the banks in the US and Canada.
Specifically, this study is designed to answer the question of "whether current personnel and emergency equipment resources assigned to fire stations are able to meet the increasing demand for fire and medical-emergency response service." The purpose of this capstone is to investigate the basic concepts of Convolutional Neural Networks in a stepwise manner and to build a simple CNN model to classify images. The e-commerce company allows customers to “commit to purchase” an item, and it charges the seller a fee (commission for sale) when this happens. The process involves fetching data from the Catalyst Reporting Tool (CaRT) using queries and then creating the required input file for the Tableau dashboard through data manipulation using SAS. Santosh Kumar Molgu, SMS Spam Classification using Machine Learning Approach, July 2016, (Dungang Liu, Peng Wang) Therefore, proper production planning and inventory management are key components of a successful functional business. The goal is to find cardholders who have frozen accounts due to a returned payment and classify them as "good" or "bad" as defined by the company. A simple feature extraction technique was employed to process the raw data, and then various machine learning algorithms were applied for multi-class classification. The BG/NBD model was first introduced by Fader, Hardie and Lee in 2004 for predicting expected future transactions and survival probability for customers in a non-contractual setup. The goal of the project is to analyze the geo-spatial relationship between stores and customers, identify trade areas, and optimize store assignment in the direct mail marketing campaigns. Measuring the quality of user experience is of paramount importance to web platforms like Cerkl. Using analytics to select the right set of customers within a target group helps businesses optimize costs and focus on the right set of customers.
For each of these two types, fixed-effects models and random-effects models, as well as the corresponding methodology, are discussed. The statistics for subpopulations, such as states and machinery categories, are also calculated using domain analysis. The supply and consumption of renewable energy resources are expected to increase significantly over the next couple of decades. In such a negative situation, it is important to build and use models to estimate the potential risk and to try to maximize profits from credit-card use. Nidhi Mavani, How Can We Make Restaurants Successful Using Topic Modeling and Regression Techniques, July 2017, (Dungang Liu, Liwei Chen) The aim is to help understand which variables are important and can be used to potentially change the weights of the previously developed model or restructure how the grades are calculated, as well as to find any applicable models that can be used to alternatively calculate the grade. James Andrew Kirtland III, Simulation Efficiency of the Finitized Logarithmic Power Series, August 27, 2009 (Martin Levy, David Kelton) In order to achieve this, the TensorFlow library has a host of pre-built methods that can be used directly. After cross-validating with various samples, it has been concluded that the logistic regression model predicts the loan status more accurately than the classification tree. A combination of R, SAS, and SPSS Modeler was used to conduct the analysis. Through this project, Market Basket Analysis and Association Rules are explored using the available dataset. Using the company's data, an Excel-based tool was developed to: 1. It uses Maximum Likelihood Estimation to estimate the conditional probability distributions. This paper will summarize investigations into this hypothesis using Bayesian change-point (or "breakpoint") analysis. Its revenue model adopts the Open Music Model, with a $9 monthly unlimited subscription fee.
The results prove that not grouping the special events as a single entity yields a more reliable model, and thus those predicted results were given to the Zoo for additional analysis. All conclusions drawn from this research are justified by proper statistical analysis. One of the important factors of any marketing campaign is the selection of the target group. Rohit Khandelwal, Comparison of Movie Recommendation Systems, July 2017, (Yan Yu, Peng Wang) The objective of this study is to compare different methods for binary predictions and highlight the accuracy of each method. Outputs are analyzed and summarized by Perl and ported to SAS and R. Non-parametric (NP) Kernel Density Estimation (KDE) is implemented, investigated, and compared with parametric modeling methods such as Gaussian distribution fitting. Carlos Alberto Isla-Hernández, Simulating a Retail Pharmacy: Modeling Considerations and Options, November 24, 2010 (David Kelton, Alex Lin [UC College of Pharmacy]) This paper aims to describe the implementation of a movie recommender system via content-based, collaborative filtering, and hybrid algorithms using Python. We use the estimated fractional demand and overall demand to create from 1 to N assortments and assign each store a single assortment in order to maximize revenue. Therefore, Group LASSO can select variables for models containing both continuous and categorical variables. As technology has progressed, new ways of modeling credit risk have emerged, including credit risk modeling using R and Python. SMS spam is relatively new and differs from email spam in terms of the nature of communication and the availability of features. With ever-increasing point-of-sale information at retail stores, it is important to utilize these data to understand customer behavior and the factors that drive them to purchase items. The question we are trying to answer here is whether we can differentiate between the two types of wine without looking at their appearance.
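The kernel density estimation compared against Gaussian fitting above can be sketched with a Gaussian kernel; the sample and bandwidth below are illustrative assumptions:

```python
import math

# Gaussian-kernel density estimate at point x, bandwidth h:
# f_hat(x) = (1 / (n * h)) * sum_i phi((x - x_i) / h),
# where phi is the standard normal density. Data are made up.
def kde(x, sample, h):
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) / norm

sample = [1.0, 1.2, 0.8, 1.1, 0.9]
density_at_center = kde(1.0, sample, h=0.3)
density_in_tail = kde(3.0, sample, h=0.3)
```

Unlike fitting a single Gaussian, the estimate adapts to whatever shape the sample has; the bandwidth h controls the smoothness of the result.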
Geran Zhao, Call Center Problem Category Prediction and Call Volume Forecasting, July 2016, (Yichen Qin, Yan Yu) We focus on observable, partially connected Markov chains that are normalized to become row stochastic. Data wrangling and pre-processing were performed in R to clean the data and convert it to a state that was ready for analysis. This capstone focuses on using completely quantitative data to predict the performance level of future rookie running backs in the NFL. The model used a combination of primary shipper data and literature values for transit-time probability distributions and freight cost variables to investigate the impact of shipper reliability and mean delivery times on logistics costs and service levels. Several avenues for future research are also identified. The tested techniques were Multiple Regression, Regression Trees, and Additive Models. The year of sale and the neighborhood are entered as random effects, and all other predictors are evaluated as fixed effects. Then, from the deterministic model, a scenario-based stochastic model that assumes varying processing times is developed. A very deep model is developed to classify whether a leaf is healthy, rusted, scabbed, or has multiple diseases. Analysts tend to concentrate on voluntary churn, because it typically occurs due to factors of the company-customer relationship which companies control, such as how billing interactions are handled or how after-sales help is provided. Priceline offers lower rates to its customers on certain deals which are non-cancellable by policy. The results from the word embedding approaches are found to be very promising and can provide a scalable solution for this task. We compare the solutions from the optimal and heuristic approaches; difficulties and challenges of each method are also discussed. Also, we built a scoring model to score each assignee on a 100-point scale based on task completion times. Summary statistics are extracted for numeric variables.
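Row-stochastic normalization, as used for the Markov chains described above, simply rescales each row of a nonnegative transition-count matrix so it sums to 1; the counts here are made up:

```python
# Normalize observed transition counts into a row-stochastic
# Markov transition matrix (each row sums to 1). Counts are illustrative.
def row_stochastic(counts):
    result = []
    for row in counts:
        total = sum(row)
        result.append([c / total for c in row])
    return result

counts = [[8, 2],   # observed transitions out of state 0
          [3, 9]]   # observed transitions out of state 1
P = row_stochastic(counts)
```

Entry P[i][j] is then the maximum-likelihood estimate of the probability of moving from state i to state j.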
However, for finalizing a model, only metrics from the test dataset were used. This project focuses on consumer buying behavior in retail grocery stores across the United States. The images are processed to obtain the pixel information by standardizing and rescaling. This will help the company design targeted marketing campaigns to cross-sell products. The report also goes on to identify frequently used contextual words that can be added to the lexicon to improve the parsing of emotions. Every citizen expects prompt service from the police, and the police department wants to earn the satisfaction of citizens through resource management and other tools. Also, the process is governed by time-related business rules that allow the data to be sent to the next vendor only after a certain period. Phishers exploit users’ trust in the appearance of a site by using webpages that are visually similar to an authentic site. Tanu Seth, American Sign Language Hand Gesture Recognition: Application of different Machine Learning Algorithms for Image Classification, August 2020, (Yan Yu, Liwei Chen). The goal of this report is to introduce process mining not only as a technique but also as a method. Pengzu Chen, Churn Prediction of Subscription-based Music Streaming Service, August 2018, (Dungang Liu, Leonardo Lozano) This calls for the need to build risk profiles for each and every loan disbursed on these peer-to-peer platforms. Predicting credit defaults with higher accuracy can save a considerable amount of capital for financial services firms. In this project, both regression and classification models are built to find the “best” model by comparing their prediction accuracy. Finally, results from Association Rules Analysis suggest important cross-selling opportunities for Private Label Fettuccini and Ragu Cheese Creations Alfredo Sauce.
With material costs forming about 70% of the finished-product sale price, unpredictable manufacturing lead times eat away what is left of the profit margin, because there is no visibility into the final costs incurred in manufacture at the time of providing a quote to the customer. It is quite different from topic classification, a technique that is based on supervised learning algorithms. via data visualization. Data Science and Analytics are widely used in the retail industry. The range of the planned capital expenditure is 9,975,000, with a minimum of 25,000 and a maximum of 10,000,000. The question that everybody wants to answer is whether wine ratings are related to a wine's physicochemical properties. Also, various demographic factors like age, sex, education, and marital status have been considered to build the model. Hence, the first step of the project is to perform store clustering considering customer behavioral attributes. Argus helps its clients maximize the value of data and analytics to allocate and align resources to strategic objectives, manage and mitigate risk (default, fraud, funding, and compliance), and optimize financial objectives. Audience size is a negative contributing factor, whereas the number of emails delivered on weekdays is a positive factor. Finally, we contextualize the proposed recommendations by applying them to an updated bankruptcy database. Pooja Sahare, COVID-19 Twitter Sentiment Analysis, August 2020, (Yan Yu, Dungang Liu).