

1z0-1110-23 Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question # 4

You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog. How would you design the authentication mechanism for the job?

A.

Package your personal OCI config file and keys in the job artifact.

B.

Use the resource principal of the job run as the signer in the job code, ensuring there is a dynamic group for this job run with appropriate access to Object Storage and the model catalog.

C.

Store your personal OCI config file and keys in the Vault, and access the Vault through the job run resource principal.

D.

Create a pre-authenticated request (PAR) for the Object Storage bucket, and use that in the job code.

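A minimal sketch of the resource-principal pattern described in option B, assuming a dynamic group and policies already grant the job run access to Object Storage and the model catalog. The namespace, bucket, and object names below are hypothetical placeholders.

```python
import ads
import oci

# Authenticate as the job run's resource principal; no config file or keys
# are packaged with the job artifact.
ads.set_auth(auth="resource_principal")
signer = oci.auth.signers.get_resource_principals_signer()

# Read the sensitive training data from Object Storage with the same signer.
object_storage = oci.object_storage.ObjectStorageClient(config={}, signer=signer)
raw_bytes = object_storage.get_object(
    namespace_name="my-namespace",          # hypothetical
    bucket_name="sensitive-training-data",  # hypothetical
    object_name="churn.csv",                # hypothetical
).data.content

# ...train the model, then save it to the model catalog; ADS model classes
# reuse the resource-principal authentication set above.
```
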
Question # 5

Which Oracle Cloud Infrastructure (OCI) service should you use to create and run Spark applications using ADS?

A.

Data Integration

B.

Vault

C.

Data Flow

D.

Analytics Cloud

Question # 6

You are asked to prepare data for a custom-built model that requires transcribing Spanish video recordings into a readable text format with profane words identified. Which Oracle Cloud service would you use?

A.

OCI Translation

B.

OCI Language

C.

OCI Anomaly Detection

D.

OCI Speech

Question # 7

Six months ago, you created and deployed a model that predicts customer churn for a call centre. Initially, it was yielding quality predictions. However, over the last two months, users are questioning the credibility of the predictions.

Which two methods would you employ to verify the accuracy of the model?

A.

Retrain the model

B.

Validate the model using recent data

C.

Drift monitoring

D.

Redeploy the model

E.

Operational monitoring

Question # 8

You want to use ADSTuner to tune the hyperparameters of a supported model you recently trained. You have just started your search and want to reduce the computational cost as well as assess the quality of the model class that you are using.

What is the most appropriate search space strategy to choose?

A.

Detailed

B.

ADSTuner doesn't need a search space to tune the hyperparameters.

C.

Perfunctory

D.

Pass a dictionary that defines a search space

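A minimal ADSTuner sketch, assuming a scikit-learn style estimator; the perfunctory strategy searches a reduced hyperparameter space, which keeps early trials cheap while still giving a feel for how the model class performs. Exact parameter names may vary across ADS versions.

```python
from ads.hpo.search_cv import ADSTuner
from ads.hpo.stopping_criterion import NTrials
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# "perfunctory" = narrow, low-cost search space for a quick first pass;
# switch to "detailed" (or pass a custom dictionary) once the model class looks promising.
tuner = ADSTuner(LogisticRegression(), strategy="perfunctory", cv=3)
tuner.tune(X, y, exit_criterion=[NTrials(10)], synchronous=True)
print(tuner.best_params)
```
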
Question # 9

Select two reasons why it is important to rotate encryption keys when using Oracle Cloud Infrastructure (OCI) Vault to store credentials or other secrets.

A.

Key rotation allows you to encrypt no more than five keys at a time.

B.

Key rotation improves encryption efficiency.

C.

Periodically rotating keys makes it easier to reuse keys.

D.

Key rotation reduces risk if a key is ever compromised.

E.

Periodically rotating keys limits the amount of data encrypted by one key version.

Question # 10

You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a model and need some additional Python libraries for processing genome sequencing data. Which of the following THREE statements are correct with respect to installing additional Python libraries to process the data?

A.

You can only install libraries using yum and pip as a normal user.

B.

You can install private or custom libraries from your own internal repositories.

C.

OCI Data Science allows root privileges in notebook sessions.

D.

You can install any open source package available on a publicly accessible Python Package Index (PyPI) repository.

E.

You cannot install a library that's not preinstalled in the provided image.

Question # 11

You realize that your model deployment is about to reach its utilization limit. What would you do to avoid the issue before requests start to fail?

A.

Update the deployment to use fewer instances.

B.

Delete the deployment.

C.

Reduce the load balancer bandwidth limit so that fewer requests come in.

D.

Update the deployment to use a larger virtual machine (more CPUs/memory).

E.

Update the deployment to add more instances.

Question # 12

After you have created and opened a notebook session, you want to use the Accelerated Data Science (ADS) SDK to access your data and get started with an exploratory data analysis.

From which two places can you access or install the ADS SDK?

A.

Oracle Autonomous Data Warehouse

B.

Oracle Machine Learning (OML)

C.

Oracle Big Data Service

D.

Conda environments in Oracle Cloud Infrastructure (OCI) Data Science

E.

Python Package Index (PyPI)

Question # 13

While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

A.

visualize_transforms()

B.

auto_transform()

C.

sample()

D.

suggest_recommendations()

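A short sketch of the ADS transformation workflow the options above refer to, assuming a tabular CSV with a binary target; the file and column names are hypothetical. suggest_recommendations() reports detected issues (including class imbalance), auto_transform() applies the recommended fixes, and visualize_transforms() only shows which transformations were applied.

```python
from ads.dataset.factory import DatasetFactory

# Open a tabular data set and declare the (hypothetical) target column.
ds = DatasetFactory.open("customer_churn.csv", target="churned")

ds.suggest_recommendations()        # report of detected issues and suggested fixes
transformed = ds.auto_transform()   # apply the recommended transformations automatically
transformed.visualize_transforms()  # inspect which transformations were applied
```
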
Question # 14

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

A.

show_corr()

B.

to_xgb()

C.

compute()

D.

show_in_notebook()

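A minimal sketch of the quick-profiling call referenced above, assuming the data arrives as a local CSV (the file name is hypothetical).

```python
from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("colleague_dataset.csv")  # hypothetical file
# Renders a summary of the data set in the notebook: feature types,
# number of observations, distributions, and correlations.
ds.show_in_notebook()
```
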
Question # 15

While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

A.

sample()

B.

suggest_recommendations()

C.

auto_transform()

D.

visualize_transforms()

Question # 16

You train a model to predict housing prices for your city. Which two metrics from the Accelerated Data Science (ADS) ADSEvaluator class can you use to evaluate the regression model?

A.

Explained Variance Score

B.

F-1 Score

C.

Weighted Precision

D.

Weighted Recall

E.

Mean Absolute Error

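A hedged sketch of ADSEvaluator on a regression model, assuming a scikit-learn estimator wrapped with ADSModel and synthetic data; for regression, the metrics table includes Explained Variance Score and Mean Absolute Error.

```python
from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for housing prices.
X, y = make_regression(n_samples=500, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = ADSModel.from_estimator(LinearRegression().fit(X_train, y_train))
evaluator = ADSEvaluator(ADSData.build(X=X_test, y=y_test), models=[model])
evaluator.metrics  # regression metrics, including explained variance and MAE
```
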
Question # 17

You are a data scientist working for a manufacturing company. You have developed a forecasting model to predict the sales demand in the upcoming months. You created a model artifact that contained custom logic requiring third-party libraries. When you deployed the model, it failed to run because you did not include all the third-party dependencies in the model artifact. What file should be modified to include the missing libraries?

A.

model_artifact_validate.py

B.

score.py

C.

requirements.txt

D.

runtime.yaml

Question # 18

You want to ensure that all stdout and stderr from your code are automatically collected and logged, without implementing additional logging in your code. How would you achieve this with Data Science Jobs?

A.

On job creation, enable logging and select a log group. Then, select either a log or the option to enable automatic log creation.

B.

Make sure that your code is using the standard logging library and then store all the logs to Object Storage at the end of the job.

C.

Create your own log group and use a third-party logging service to capture job run details for log collection and storing.

D.

You can implement custom logging in your code by using the Data Science Jobs logging service.

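A sketch of enabling automatic log capture at job creation with the ADS jobs API, assuming a pre-created log group (the OCID, script name, and conda slug are placeholders). With a log group attached and no explicit log selected, the service can create the log automatically and collect stdout/stderr from the run; builder method names may differ slightly across ADS versions.

```python
from ads.jobs import Job, DataScienceJob, ScriptRuntime

job = (
    Job(name="churn-training")
    .with_infrastructure(
        DataScienceJob()
        .with_shape_name("VM.Standard2.1")
        .with_log_group_id("ocid1.loggroup.oc1..example")  # placeholder OCID
        # Omitting .with_log_id(...) lets the service create a log automatically.
    )
    .with_runtime(
        ScriptRuntime()
        .with_source("train.py")                     # hypothetical training script
        .with_service_conda("generalml_p38_cpu_v1")  # hypothetical conda slug
    )
)
job.create()
run = job.run()
run.watch()  # streams the stdout/stderr collected in the OCI Logging service
```
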
Question # 19

You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

A.

Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

B.

Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.

C.

Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.

D.

Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

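A hedged sketch of the final step in the workflow above (and of the Data Flow answer to Question # 5): after developing the PySpark script in a notebook session with a PySpark conda environment and core-site.xml configured, a Data Flow application can be defined and run through ADS. The OCIDs, bucket URIs, and shapes are placeholders, and builder method names may vary by ADS version.

```python
from ads.jobs import Job, DataFlow, DataFlowRuntime

df_app = (
    Job(name="medical-records-analysis")
    .with_infrastructure(
        DataFlow()
        .with_compartment_id("ocid1.compartment.oc1..example")     # placeholder
        .with_logs_bucket_uri("oci://dataflow-logs@mynamespace/")  # placeholder
        .with_driver_shape("VM.Standard2.4")
        .with_executor_shape("VM.Standard2.4")
        .with_num_executors(2)
    )
    .with_runtime(
        DataFlowRuntime()
        .with_script_uri("oci://code@mynamespace/analyze_records.py")  # the PySpark script
    )
)
df_app.create()     # creates the Data Flow application
run = df_app.run()  # submits a Data Flow run
```
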
Question # 20

You have created a Data Science project in a compartment called Development and shared it with a group of collaborators. You now need to move the project to a different compartment called Production after completing the current development iteration.

Which statement is correct?

A.

Moving a project to a different compartment also moves its associated notebook sessions and models to the new compartment.

B.

Moving a project to a different compartment requires deleting all its associated notebook sessions and models first.

C.

You cannot move a project to a different compartment after it has been created.

D.

You can move a project to a different compartment without affecting its associated notebook sessions and models.

Question # 21

You have a complex Python code project that could benefit from using Data Science Jobs as it is a repeatable machine learning model training task. The project contains many subfolders and classes. What is the best way to run this project as a Job?

A.

ZIP the entire code project folder and upload it as a Job artifact. Jobs automatically identifies the __main__ top level where the code is run.

B.

Rewrite your code so that it is a single executable Python or Bash/Shell script file.

C.

ZIP the entire code project folder and upload it as a Job artifact on job creation. Jobs identifies the main executable file automatically.

D.

ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file.

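A sketch of option D using the ADS jobs API, assuming the project has already been zipped locally; the archive name and entrypoint path are hypothetical, and JOB_RUN_ENTRYPOINT tells the Jobs service which file inside the extracted archive to execute. Builder method names may vary across ADS versions.

```python
from ads.jobs import Job, DataScienceJob, ScriptRuntime

job = (
    Job(name="ml-training-project")
    .with_infrastructure(DataScienceJob().with_shape_name("VM.Standard2.1"))
    .with_runtime(
        ScriptRuntime()
        .with_source("ml_project.zip")  # zipped project with subfolders and classes
        .with_environment_variable(JOB_RUN_ENTRYPOINT="ml_project/train.py")  # main executable file
    )
)
job.create()
job.run()
```
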
Question # 22

Which of the following TWO non-open-source JupyterLab extensions has Oracle Cloud Infrastructure (OCI) Data Science developed and added to the notebook session experience?

A.

Environment Explorer

B.

Table of Contents

C.

Command Palette

D.

Notebook Examples

E.

Terminal

Question # 23

You have received machine learning model training code, without clear information about the optimal shape to run the training. How would you proceed to identify the optimal compute shape for your model training that provides a balanced cost and processing time?

A.

Start with a random compute shape and monitor the utilization metrics and time required to finish the model training. Perform model training optimizations and performance tests in advance to identify the right compute shape before running the model training as a job.

B.

Start with a smaller shape and monitor the Job Run metrics and time required to complete the model training. If the compute shape is not fully utilized, tune the model parameters, and re-run the job. Repeat the process until the shape resources are fully utilized.

C.

Start with the strongest compute shape Jobs supports and monitor the Job Run metrics and time required to complete the model training. Tune the model so that it utilizes as much compute resources as possible, even at an increased cost.

D.

Start with a smaller shape and monitor the utilization metrics and time required to complete the model training. If the compute shape is fully utilized, change to a compute shape that has more resources and re-run the job. Repeat the process until the processing time does not improve.

Question # 24

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

A.

Configure the deployment infrastructure.

B.

Execute the inference logic code.

C.

Define the compute scaling strategy.

D.

Define the inference server dependencies.

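A minimal score.py sketch illustrating the inference role it plays: the model server in a deployment calls load_model() once and predict() per scoring request. The serialized file name and the use of joblib/pandas are assumptions about how the model was saved.

```python
# score.py
import os
import joblib
import pandas as pd

MODEL_FILE = "model.joblib"  # assumed name of the serialized model in the artifact

def load_model():
    """Load and return the serialized model shipped inside the model artifact."""
    artifact_dir = os.path.dirname(os.path.realpath(__file__))
    return joblib.load(os.path.join(artifact_dir, MODEL_FILE))

def predict(data, model=load_model()):
    """Inference logic: turn the incoming payload into features and return predictions."""
    frame = pd.DataFrame(data)
    return {"prediction": model.predict(frame).tolist()}
```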