How to Detect Attacks on AI ML Models: Adversarial Robustness Toolbox

#datascience #machinelearning #deeplearning #datanalytics #predictiveanalytics #artificialintelligence #generativeai #largelanguagemodels #computervision #naturallanguageprocessing #agents #transformers #embedding #graphml #graphdatascience #datavisualization #businessintelligence #optimization #montecarlosimulation #simulation #LLMs #python #aws #azure #gcp
Comments
Author

If you found this content useful, please consider sharing it with others who might benefit. Your support is greatly appreciated :)

SridharKumarKannam
Author

You didn't write out the other code you explained at minute 9, where the robust classifier is defined. I'm running this on Colab.

Username
Author

I followed you until minute 9, then you changed the screen and just wrote:

# Setting the number of rows and columns for the figure
nrows, ncols = 2, 5

# Generating subplots
fig, axes = plt.subplots(
    nrows=nrows,
    ncols=ncols,
    figsize=(20, 10)
)

# Defining a range of eps values to try
eps_to_try = [0.01, 0.025, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2, 0.25]

# Defining a counting variable to traverse eps_to_try
counter = 0

# Iterating over rows and cols
for i in range(nrows):
    for j in range(ncols):
        # Creating an attack object for the current value of eps
        attack_fgsm = FastGradientMethod(
            estimator=classifier,
            eps=eps_to_try[counter]
        )

        # Generating adversarial images
        test_images_adv = attack_fgsm.generate(x=test_images)

        # Showing the first adversarial image
        axes[i, j].imshow(X=test_images_adv[0].squeeze(), cmap="gray")

        # Disabling x and y ticks
        axes[i, j].set_xticks(ticks=[])
        axes[i, j].set_yticks(ticks=[])

        # Evaluating model performance on adversarial samples and retrieving test accuracy
        test_score = classifier._model.evaluate(
            x=test_images_adv,
            y=test_labels
        )[1]

        # Getting prediction for the image that we displayed
        prediction = np.argmax(model.predict(
            x=np.expand_dims(a=test_images_adv[0], axis=0)
        ))

        # Showing the current eps value, test accuracy, and prediction
        axes[i, j].set_title(
            label=f"Eps value: {eps_to_try[counter]}\n"
                  f"Test accuracy: {test_score * 100:.2f}%\n"
                  f"Prediction: {prediction}"
        )

        # Incrementing the counter
        counter += 1

And here I got an error in step 4:
# Evaluate the robust classifier

# Evaluate the robust classifier's performance on the original test data:
x_test_robust_pred = np.argmax(robust_classifier.predict(x_test), axis=1)
nb_correct_robust_pred = np.sum(x_test_robust_pred == np.argmax(y_test, axis=1))

print("Original test data:")
print("Correctly classified:
print("Incorrectly classified: {}".format(len(x_test) - nb_correct_robust_pred))

robust_classifier is not defined. How do I solve this? Did I miss something?

Username
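
For anyone hitting the same error: robust_classifier has to be created and adversarially trained before the step 4 evaluation cell can run. Below is a minimal sketch of how such a classifier is typically set up with ART's AdversarialTrainer, assuming an MNIST-style tf.keras workflow; the small CNN, the eps value, and the training hyperparameters are illustrative placeholders, not the exact ones from the video.

import numpy as np
import tensorflow as tf
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

# Some ART/TensorFlow version combinations require eager execution
# to be disabled for KerasClassifier
tf.compat.v1.disable_eager_execution()

# Load and preprocess MNIST (placeholder data pipeline)
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.astype("float32")[..., np.newaxis] / 255.0
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=10)

# Small CNN standing in for the model built earlier in the video
robust_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
robust_model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"]
)

# Wrapping the Keras model as an ART classifier
robust_classifier = KerasClassifier(model=robust_model, clip_values=(0.0, 1.0))

# Attack used to craft adversarial samples during training (eps is illustrative)
attack = FastGradientMethod(estimator=robust_classifier, eps=0.15)

# Adversarial training: a fraction of every batch is replaced with adversarial samples
trainer = AdversarialTrainer(robust_classifier, attacks=attack, ratio=0.5)
trainer.fit(x=train_images, y=train_labels, nb_epochs=5, batch_size=128)

# robust_classifier can now be evaluated as in the step 4 cell above

After trainer.fit() completes, robust_classifier is trained in place, so the step 4 cell's robust_classifier.predict(x_test) call should work as long as this cell runs first.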