Student Hacks - Pandas
A series of lessons based on the utilization of Pandas and NumPy to read and create interesting things with files and data.
- Predictive Analysis
- 1. Intro to NumPy and the features it consists of
- 2. Using NumPy to create arrays
- 3. Basic array operations
- 4. Data analysis using numpy
- Pandas
- Basics of Pandas.
- Pandas Classes
- PeriodIndex
- Dataframe Grouped By
- CSV FILES!
- Our Frontend Data Analysis Project
- Popcorn Hacks
- Main Hack
- Grading
Predictive Analysis
Predictive analysis is the use of statistical algorithms, data mining, and machine learning techniques to analyze current and historical data in order to make predictions about future events or behaviors. It involves identifying patterns and trends in data, and then using that information to forecast what is likely to happen in the future.
Predictive analysis is used in a wide range of applications, from forecasting sales and demand, to predicting customer behavior, to detecting fraudulent transactions. It involves collecting and analyzing data from a variety of sources, including historical data, customer data, financial data, and social media data, among others.
The process of predictive analysis typically involves the following steps:
- Defining the problem and identifying the relevant data sources
- Data preprocessing and cleaning to ensure that the data is accurate, complete, and consistent
- Exploring and analyzing the data to identify patterns and trends
- Selecting an appropriate model or algorithm to use for predictions
- Training and validating the model using historical data
- Using the trained model to make predictions on new data and refining the model as necessary based on the results.
- Monitoring and evaluating the performance of the model over time
Predictive analysis can help organizations make more informed decisions, improve efficiency, and gain a competitive advantage by leveraging insights from data.
It is most commonly used in marketing, where analysts try to predict which products will be most popular and advertise those products as heavily as possible, and in healthcare, where algorithms analyze patterns to reveal risk factors for diseases, suggest preventive treatment, predict the results of various treatments so that the best option can be chosen for each patient individually, and forecast disease outbreaks and epidemics.
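Here is a minimal sketch of these steps (the monthly sales figures below are made up, and a straight-line trend fit with NumPy's polyfit stands in for a real predictive model):
import numpy as np
# made-up historical data: units sold in each of the last 6 months
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([100, 110, 125, 130, 140, 155])
# "train" a simple model: fit a straight line (degree-1 polynomial) to the historical data
slope, intercept = np.polyfit(months, sales, 1)
# use the fitted trend to predict sales for month 7
print("Predicted sales for month 7:", slope * 7 + intercept)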
1. Intro to NumPy and the features it consists of
Numpy, by definition, is the fundamental package for scientific computing in Python: it can be used to perform mathematical operations, provides multidimensional array structures, and makes data analysis much easier. Numpy is very important and useful when it comes to data analysis, as its features make it easy to perform mathematical operations and to analyze data files.
If you don't already have numpy installed, you can do so using conda install numpy or pip install numpy.
Once that is complete, to import numpy in your code, all you must do is:
import numpy as np
2. Using NumPy to create arrays
An array is the central data structure of the NumPy library. Arrays are used as containers that can store more than one item at the same time. The np.array function is used to create an array, and it can also be used to create multidimensional arrays.
Shown below is how to create a 1D array:
a = np.array([1, 2, 3])
print(a)
# this creates a 1D array
How could you create a 3D array based on knowing how to make a 1D array?
# Create a 1D array
arr1d = np.array([1, 2, 3, 4, 5, 6])
# Reshape the 1D array into a 3D array
arr3d = arr1d.reshape((2, 1, 3))
print(arr3d)
Arrays can be printed in different ways, including a more readable format. As we have seen, arrays are printed in rows and columns, but we can change that by using the reshape function.
c = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(c.reshape(1, 9)) # organizes it all in a single line of output
In the code segment below, we can also specially select certain rows and columns from the array to further analyze selective data.
print(c[1:, :2])
# the 1: means "start at row 1 and select all the remaining rows"
# the :2 means "select the first two columns"
3. Basic array operations
One of the most basic operations that can be performed on arrays is arithmetic operations. With numpy, it is very easy to perform arithmetic operations on arrays. You can add, subtract, multiply and divide arrays, just like you would with regular numbers. When performing these operations, numpy applies the operation element-wise, meaning that it performs the operation on each element in the array separately. This makes it easy to perform operations on large amounts of data quickly and efficiently.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a + b) # adds the two arrays element-wise
print(a - b) # subtracts the two arrays element-wise
print(a * b) # multiplies the two arrays element-wise
print(a / b) # divides the two arrays element-wise
d = np.exp(b) # e raised to the power of each element
e = np.sqrt(b) # square root of each element
print(d)
print(e)
Now that you know how to use mathematical functions beyond the basic four operations, such as exponent and square root, can you write code to calculate the three main trig functions (sin, cos, tan), the natural log, and the log10 of a 1D array?
import numpy as np
# create a 1D array
arr = np.array([0, 30, 45, 60, 90])
# calculate sin
sin_arr = np.sin(np.deg2rad(arr))
print(sin_arr)
# calculate cos
cos_arr = np.cos(np.deg2rad(arr))
print(cos_arr)
# calculate tan
tan_arr = np.tan(np.deg2rad(arr))
print(tan_arr)
# calculate natural log (note: log(0) is undefined, so numpy returns -inf with a warning for the first element)
log_arr = np.log(arr)
print(log_arr)
# calculate log10 (the same caveat applies to the 0 element)
log10_arr = np.log10(arr)
print(log10_arr)
4. Data analysis using numpy
Numpy provides a convenient and powerful way to perform data analysis tasks on arrays. One of the most common tasks in data analysis is finding the mean, median, and standard deviation of a dataset. Numpy provides functions to perform these operations quickly and easily. The mean function calculates the average value of the data, the median function calculates the middle value in the data, and the standard deviation function calculates how spread out the data is from the mean. Additionally, numpy provides functions to find the minimum and maximum values in the data. These functions are very useful for gaining insight into the properties of large datasets and can be used for a wide range of data analysis tasks.
data = np.array([2, 5, 12, 13, 19])
print(np.mean(data)) # finds the mean of the dataset
print(np.median(data)) # finds the median of the dataset
print(np.std(data)) # finds the standard deviation of the dataset
print(np.min(data)) # finds the min of the dataset
print(np.max(data)) # finds the max of the dataset
Now that you have learned this, can you find a different way to calculate the sum or the product of a dataset than the approach we used before?
import numpy as np
data = np.array([2, 4, 6, 8, 10])
# calculate the product of the elements in the array
product = np.prod(data)
print("Product of elements:", product)
# calculate the sum of the elements in the array
total = np.sum(data)
print("Sum of elements:", total)
Numpy also has the ability to handle CSV files, which are commonly used to store and exchange large datasets. By importing CSV files into numpy arrays, we can easily perform complex operations and analysis on the data, making numpy an essential tool for data scientists and researchers.
genfromtxt and loadtxt are two functions in the numpy library that can be used to read data from text files, including CSV files.
genfromtxt is a more advanced function that can be used to read text files that have more complex structures, including CSV files. genfromtxt can handle files that have missing or invalid data, or files that have columns of different data types. It can also be used to skip header lines or to read only specific columns from the file.
import numpy as np
padres = np.genfromtxt('files/padres.csv', delimiter=',', dtype=str, encoding='utf-8')
# delimiter=',' tells numpy that the columns are separated by commas
# genfromtxt reads the csv file itself
# dtype=str tells numpy to read every value as a string
print(padres)
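As a sketch of the header-skipping and column-selection features mentioned above (assuming files/padres.csv has a header row and that the columns at indices 1 and 2 are the ones we want), genfromtxt's skip_header and usecols parameters could be used like this:
import numpy as np
# skip the header row and read only the columns at indices 1 and 2
subset = np.genfromtxt('files/padres.csv', delimiter=',', dtype=str, encoding='utf-8',
                       skip_header=1, usecols=(1, 2))
print(subset)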
loadtxt is a simpler function that can be used to read simple text files that have a regular structure, such as files that have only one type of data (such as all integers or all floats). loadtxt can be faster than genfromtxt because it assumes that the data in the file is well-structured and can be easily parsed.
import numpy as np
padres = np.loadtxt('files/padres.csv', delimiter=',', dtype=str, encoding='utf-8')
print(padres)
for i in padres:
    print(",".join(i))
Pandas
What is Pandas
Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring, and manipulating data.
Why Use Pandas?
Pandas allows us to analyze big data and make conclusions based on statistical theories. Pandas can clean messy data sets and make them readable and relevant. It is a core tool for data analysis and data manipulation.
What Can Pandas Do?
Pandas gives you answers about the data, such as:
- Is there a correlation between two or more columns?
- What is the average value?
- What is the max value?
- What is the min value?
It also lets you load data, delete data, and sort data.
Pandas is also able to delete rows that are not relevant or that contain wrong values, like empty or NULL values. This is called cleaning the data.
import pandas as pd
# This imports the pandas library; this line is needed whenever you use pandas.
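Here is a minimal sketch of the capabilities listed above, using a small made-up DataFrame (the column names and values are illustrative assumptions, not real data):
import pandas as pd
import numpy as np
# small made-up dataset (illustrative only)
df = pd.DataFrame({
    'hours_studied': [1, 2, 3, 4, np.nan],
    'score': [55, 63, 71, 80, 90]
})
df = df.dropna()  # clean the data: drop rows with empty (NaN) values
print(df['hours_studied'].corr(df['score']))  # correlation between two columns
print(df['score'].mean())  # average value
print(df['score'].max(), df['score'].min())  # max and min values
print(df.sort_values(by='score', ascending=False))  # sort the data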
import pandas as pd
data1 = {
    'teams': ["BARCA", "REAL", "ATLETICO"],
    'standings': [1, 2, 3]
}
myvar = pd.DataFrame(data1)
print(myvar)
import pandas as pd
score = [5/5, 5/5, 1/5]
myvar = pd.Series(score, index = ["math", "science", "pe"])
print(myvar)
import pandas as pd
time = pd.period_range('2022-01', '2022-12', freq='M')
print(time)
Now implement a way to show a period index from June 2022 to July 2023 in days.
import pandas as pd
# Create a period index starting from June 2022 and ending in July 2023
date_range = pd.period_range(start='2022-06-01', end='2023-07-31', freq='D')
# Print the period index
print(date_range)
Dataframe Grouped By
- This allows you to organize your data and calculate different functions for each group, such as:
- count(): returns the number of non-null values in each group.
- sum(): returns the sum of values in each group.
- mean(): returns the mean of values in each group.
- min(): returns the minimum value in each group.
- max(): returns the maximum value in each group.
- median(): returns the median of values in each group.
- var(): returns the variance of values in each group.
- agg(): applies one or more functions to each group and returns a new DataFrame with the results.
import pandas as pd
data = {
    'Category': ['E', 'F', 'E', 'F', 'E', 'F', 'E', 'F'],
    'Value': [100, 250, 156, 255, 240, 303, 253, 3014]
}
df = pd.DataFrame(data)
grouped = df.groupby('Category')
# GUESS WHAT THIS WOULD BE IF WE WERE LOOKING FOR COMBINED TOTALS!
# (If we were looking for combined totals, we would use the sum() function on the grouped object.
# This would sum up the values in the 'Value' column for each category and give us the total for each category.)
print(grouped)
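Below is a minimal sketch of the functions listed above, reusing the same Category/Value data; the sum() call answers the "combined totals" question, and agg() applies several functions at once:
import pandas as pd
data = {
    'Category': ['E', 'F', 'E', 'F', 'E', 'F', 'E', 'F'],
    'Value': [100, 250, 156, 255, 240, 303, 253, 3014]
}
df = pd.DataFrame(data)
grouped = df.groupby('Category')
# combined total of 'Value' for each category
print(grouped.sum())
# several aggregate functions applied to each group at once
print(grouped['Value'].agg(['count', 'mean', 'min', 'max']))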
import pandas as pd
colors = pd.Categorical(['yellow', 'orange', 'blue', 'yellow', 'orange'], categories=['yellow', 'orange', 'blue'])
print(colors)
import pandas as pd
timing = pd.Timestamp('2023-02-05 02:00:00')
print(timing)
| Name | Position | Average | HR | RBI | OPS | JerseyNumber |
|------|----------|---------|----|-----|-----|--------------|
| Manny Machado | 3B | .298 | 32 | 102 | .897 | 13 |
| Tatis Jr | RF | .281 | 42 | 97 | .975 | 23 |
| Juan Soto | LF | .242 | 27 | 62 | .853 | 22 |
| Xanger Bogaerts | SS | .307 | 15 | 73 | .833 | 2 |
| Nelson Cruz | DH | .234 | 10 | 64 | .651 | 32 |
| Matt Carpenter | DH | .305 | 15 | 37 | 1.138 | 14 |
| Cronezone | 1B | .239 | 17 | 88 | .722 | 9 |
| Ha-Seong Kim | 2B | .251 | 11 | 59 | .708 | 7 |
| Trent Grisham | CF | .184 | 17 | 53 | .626 | 1 |
| Luis Campusano | C | .250 | 1 | 5 | .593 | 12 |
| Austin Nola | C | .251 | 4 | 40 | .649 | 26 |
| Jose Azocar | OF | .257 | 0 | 10 | .630 | 28 |
QUESTION: WHAT DO YOU GUYS THINK THE INDEX FOR THIS WOULD BE?
The index for this data could be the player names or their jersey numbers, depending on the purpose of the analysis. If the analysis is focused on individual players, using their names as the index would be more appropriate. If the analysis is focused on team statistics, using the jersey numbers as the index could be more useful.
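As a side sketch (assuming the roster above is saved in files/padres.csv with these column headers), either choice could be set as the index with set_index:
import pandas as pd
df = pd.read_csv('files/padres.csv')
print(df.set_index('Name'))  # index by player name
print(df.set_index('JerseyNumber'))  # index by jersey number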
Can you explain what is going on in the code segment below? (hint: define what ascending=False means and what df.head means)
The CSV file named 'padres.csv' is read using the pd.read_csv method and then sorted by the 'Name' column in descending (reverse alphabetical) order using the sort_values method. df.head(10) displays the first 10 rows of the sorted dataframe, representing the top 10 based on the 'Name' column. df.tail(10) displays the last 10 rows of the sorted dataframe, representing the bottom 10 based on the 'Name' column. In the last line, ', '.join(df.tail(10)) iterates over the dataframe, which yields its column labels, so it prints the column names joined by commas rather than the rows themselves. The ascending=False parameter in the sort_values method sorts the dataframe in descending order; if it is set to True, it sorts in ascending order. df.head is a method that returns the first n rows of a dataframe, where n is the number specified as a parameter (10 in this case).
import pandas as pd
# read csv and sort by 'Name' in descending order
df = pd.read_csv('files/padres.csv').sort_values(by=['Name'], ascending=False)
print("--Name Top 10---------")
print(df.head(10))
print("--Name Bottom 10------")
print(df.tail(10))
print(', '.join(df.tail(10)))
import pandas as pd
df = pd.read_csv("./files/housing.csv")
mode_total_rooms = df['total_rooms'].mode()
print(f"The mode of the 'total_rooms' column is: {mode_total_rooms}")
import pandas as pd
df = pd.read_csv("./files/housing.csv")
grouped_df = df.groupby('total_rooms')
agg_df = grouped_df.agg({'total_rooms': 'sum', 'population': 'mean', 'longitude': 'count'})
# WHAT DO YOU GUYS THINK df.agg means in context of pandas and what does it stand for.
print(agg_df)
Popcorn Hacks
- Complete fill in the blanks for Predictive Analysis Numpy COMPLETED
- Takes notes on Panda where it asks you to COMPLETED
- Complete code segment tasks in Panda and Numpy COMPLETED
Main Hack
- Make a data file - content is up to you, just make sure there are integer values - and print
- Run Panda and Numpy commands
- Panda:
- Find Min and Max values
- Sort in order - can be order of least to greatest or vice versa
- Create a smaller dataframe and merge it with your data file
- Panda:
import pandas as pd
import numpy as np
# read JSON file
data = pd.read_json('files/grade.json')
# find min and max values
min_gpa = data['GPA'].min()
max_gpa = data['GPA'].max()
print("Min GPA:", min_gpa)
print("Max GPA:", max_gpa)
# sort by GPA from least to greatest
data = data.sort_values('GPA')
# create a smaller dataframe with only 'Student ID' and 'GPA' columns
df = data[['Student ID', 'GPA']]
# merge with original dataframe on 'Student ID'
merged = pd.merge(df, data, on='Student ID')
print(merged)
- Numpy:
- Random number generation
- create a multi-dimensional array (multiple elements)
- create an array with linearly spaced intervals between values
import numpy as np
# Random number generation
random_numbers = np.random.rand(5) # generate an array of 5 random numbers between 0 and 1
print(random_numbers)
# create a multi-dimensional array
multi_dim_array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(multi_dim_array)
# create an array with linearly spaced intervals between values
linear_array = np.linspace(0, 10, 5) # generate an array with 5 values equally spaced between 0 and 10
print(linear_array)
import numpy as np
# generate a 3x3 array of random integers between 0 and 10
rand_array = np.random.randint(low=0, high=10, size=(3, 3))
print(rand_array)
import numpy as np
# create a 2x2x2 array with elements ranging from 1 to 8
multi_array = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(multi_array)
import numpy as np
# create an array with 10 evenly spaced values between 1 and 5
lin_array = np.linspace(start=1, stop=5, num=10)
print(lin_array)