SQL*Loader: Loading Data into an Oracle Database
2 September 2020
In this article, we will talk about how to load data into an Oracle database; specifically, we will look at SQL*Loader, the utility used to import data into Oracle databases.
Loading and converting data
One of the most typical tasks of an Oracle database administrator (as of any other DBA) is loading data from external sources. Although this is usually done when the database is first populated, data often has to be loaded into various tables throughout the life of a production database.
Previously, database administrators loaded data from flat files into Oracle tables using only the SQL*Loader utility.
Today, although SQL*Loader still remains an important tool for loading data into Oracle databases, Oracle also offers another way to load tables; namely, it recommends using the external tables mechanism.
External tables use the functionality of SQL*Loader and allow you to perform complex transformations on data before loading it into the database. With their help, data can not only be loaded into the database but also written out to external files, and from those files into other Oracle databases.
In many cases, especially in data warehouses, the loaded data needs to be transformed. Oracle offers several tools for performing data transformation inside the database, including SQL and PL/SQL technologies. For example, the powerful MODEL clause allows you to create complex multidimensional arrays and perform complex inter-row and inter-array calculations using simple SQL code.
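As a quick illustration of the MODEL clause, here is a minimal sketch; the sales_data table, its columns, and the projection rule are assumptions made up for this example:

-- Hypothetical table: sales_data(country, year, sales)
SELECT country, year, sales
FROM sales_data
MODEL
  PARTITION BY (country)   -- calculate independently for each country
  DIMENSION BY (year)      -- cells are addressed by year
  MEASURES (sales)         -- the measure being calculated
  RULES (
    -- project 2021 sales as the sum of the two prior years
    sales[2021] = sales[2019] + sales[2020]
  )
ORDER BY country, year;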
In addition, Oracle offers a useful data replication mechanism called Oracle Streams, which allows you to propagate changes from one database to another. This mechanism can be used for various purposes, including the maintenance of a standby database.
This chapter deals with all of these issues related to loading and transforming data. But first, of course, it provides a brief overview of what the extraction, transformation, and loading process is all about.
A brief overview of the process of extracting, transforming, and loading data
Before you can launch any application against an Oracle database, the database must first be populated with data. One of the most typical data sources for populating a database is a set of flat files from legacy systems or other sources.
Previously, conventional or direct-path loading with the SQL*Loader utility was the only way to load this data from external files into database tables. Technically speaking, SQL*Loader still remains the main utility supplied by Oracle for loading data from external files, but if you wish you can also use the external tables mechanism, which relies on the SQL*Loader functionality to access data contained in external data files.
Because the source data may contain redundant information, or information in a format other than what the application requires, it frequently needs to be transformed in some way before the database can use it. Transforming data is a particularly common requirement for data warehouses, which must extract data from multiple sources. A preliminary or basic transformation of the source data can be performed by the SQL*Loader utility itself.
However, complex data transformation requires separate steps, and there are several methods for managing this process. In most warehouses, data goes through three main stages before it can be analyzed: extraction, transformation, and loading, collectively called the ETL process. Below is a description of what each of these steps involves.
- Extraction is the process of identifying and extracting the source data, possibly in different formats, from multiple sources, not all of which may be relational databases.
- Transformation is the most complex and time-consuming of the three processes and may involve applying complex rules to the data, as well as performing operations such as aggregating data and applying various functions to it.
- Loading is the process of placing the data in database tables. It may also involve maintaining indexes and table-level constraints.
Previously, organizations used two different methods to perform the ETL process: transform-then-load and load-then-transform. The first method involves cleaning or transforming the data before it is loaded into Oracle tables.
To perform this transformation, individually developed ETL processes are usually used. As for the second method, in most cases it does not take full advantage of Oracle's built-in transformation capabilities; instead, it involves first loading the data into staging tables and moving it to the final tables only after it has been transformed inside the database itself.
Staging tables play a central role in this method. Its disadvantage is that the tables have to support several kinds of data, some of it in its original state and some already in the finished state.
Today, Oracle Database 11g offers truly impressive ETL capabilities that allow you to load data into the database in a new way: transforming it as it is loaded. By using the Oracle database to perform all stages of ETL, the usually time-consuming ETL processes can be carried out fairly easily. Oracle provides a whole set of auxiliary tools and technologies that reduce the time needed to load data into the database while simplifying all the related work. In particular, the ETL solution offered by Oracle includes the following components.
- External tables. External tables provide a way to combine the loading and transformation processes. Using them lets you avoid cumbersome and time-consuming staging tables during data loading. They are described in more detail later in this chapter, in the section "Using external tables to load data".
- Multitable inserts. The multitable insert mechanism allows you to insert data not into one table but into several tables at once, using different criteria for different tables. It eliminates the need for an extra step of splitting the data into separate groups before loading. It is discussed in more detail later in this chapter, in the section "Using multitable inserts".
- Inserts and updates (upserts). This is a made-up name for a technology that lets you either insert data into a table or merely update rows with a single MERGE SQL statement. The MERGE statement either inserts new data or updates rows if such data already exist in the table (a minimal sketch follows this list). It can make the loading process very easy because it eliminates the need to worry about whether the table already contains such data. It is explained in more detail later in this chapter, under "Using the MERGE statement".
- Table functions. Table functions generate a set of rows as output. They return an instance of a collection type (i.e., a nested table or one of the VARRAY data types). They are similar to views, but instead of defining the transformation declaratively in SQL, they define it procedurally in PL/SQL. Table functions are very helpful when performing large and complex transformations, as they let you perform the transformation before loading data into the data warehouse. They are discussed in more detail later in this chapter, in the section "Using table functions to transform data".
- Transportable tablespaces. These tablespaces provide an efficient and fast way to move data from one database to another. For example, they can be used to migrate data easily and quickly between an OLTP database and a data warehouse. We will talk more about them in future blog articles.
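Here is a minimal sketch of the upsert technique mentioned above; the dept_costs and new_costs tables and their columns are assumptions made up for this example:

MERGE INTO dept_costs d
USING new_costs n
ON (d.dept_id = n.dept_id)
WHEN MATCHED THEN
  UPDATE SET d.total_cost = n.total_cost   -- row exists: update it
WHEN NOT MATCHED THEN
  INSERT (dept_id, total_cost)             -- row missing: insert it
  VALUES (n.dept_id, n.total_cost);

Either way, a single statement handles both cases, which is exactly what makes the load logic simpler.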
Just for the record! You can also use the Oracle Warehouse Builder (OWB) tool to load data efficiently. This wizard-driven tool loads data into the database via SQL*Loader.
It can load data both from Oracle databases and from flat files. It can also extract data from other databases, such as Sybase, Informix, and Microsoft SQL Server, through the Oracle Transparent Gateways mechanism. It combines ETL and design functions in a very convenient format.
In the next section, you will learn how to use the SQL*Loader utility to load data from external files. It will also help you understand how to use external tables to load data. After the external tables mechanism is described, you will learn about the various methods Oracle Database 11g offers for transforming data.
Using the SQL*Loader utility
The SQL*Loader utility, which comes with the Oracle database server, is often used by database administrators to load external data into Oracle databases. It is an extremely powerful tool capable of much more than just loading data from text files. Below is a brief list of its other capabilities.
- It allows you to transform data before or during the load itself (although in a limited way).
- It allows you to load data from several types of sources: disks, tapes, and named pipes, as well as from multiple data files in the same load session.
- It allows you to load data over the network.
- It allows you to selectively load data from the input file based on various conditions.
- It allows you to load either a whole table or only a certain part of it, as well as to load data into several tables simultaneously.
- It allows you to perform simultaneous data loading operations.
- It allows you to automate the load process so that it runs at a scheduled time.
- It allows you to load complex object-relational data.
The SQL*Loader utility can be used to load data in several modes.
- In conventional loading mode. In this mode, SQL*Loader reads several rows at a time and stores them in a bind array, then inserts the entire array into the database at once and commits the operation.
- In direct-path loading mode. In this mode, no INSERT SQL statement is used to load data into Oracle tables. Instead, column array structures are built from the data to be loaded, which are then used to format Oracle data blocks, after which those blocks are written directly to the database tables.
- In external table loading mode. Oracle's external tables mechanism builds on the SQL*Loader functionality and lets you access data contained in external files as if it were part of the database tables. When an external table is created with the ORACLE_LOADER access driver, SQL*Loader functionality is in fact used (a minimal sketch follows this list). Oracle Database 11g also offers a newer access driver, ORACLE_DATAPUMP, which allows writing to external tables.
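To give a flavor of the external table mechanism ahead of the detailed discussion later in this chapter, here is a minimal sketch; the directory path, file name, and columns are assumptions made up for this example:

-- A directory object pointing at the flat files (hypothetical path)
CREATE OR REPLACE DIRECTORY load_dir AS '/u01/app/oracle/load';

CREATE TABLE finance_ext (
  account_no NUMBER,
  amount     NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER            -- uses SQL*Loader functionality under the covers
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('finance.dat')      -- hypothetical data file
);

-- The flat file can now be queried like an ordinary table:
SELECT * FROM finance_ext;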
The first two loading modes (or methods) have both advantages and disadvantages. Because direct-path loading bypasses Oracle's SQL engine, it is much faster than conventional loading.
However, in terms of data transformation capabilities, conventional mode is far superior to direct mode, because it lets you apply a number of different functions to the table's columns during the load. Therefore, Oracle recommends using the conventional loading method for small amounts of data, and the direct method for large amounts.
Direct-path loading will be considered in more detail right after we cover the basic functionality of SQL*Loader and the conventional loading method. As for the external table mode, it is discussed in more detail later in this chapter, in the section "Using external tables to load data".
The procedure for loading data with the SQL*Loader utility includes two main steps.
- Selecting a data file that contains the data to be loaded. Such a file usually has the .dat extension and contains the necessary data. The data can be in several formats.
- Creating a control file. The control file tells SQL*Loader how the data fields should be mapped to the columns of the Oracle table and whether the data should be transformed in any way. Such a file usually has the .ctl extension.
The control file provides the scheme for mapping the table's columns to the data fields in the input file. Having a separate data file for the load is not a mandatory requirement at all. If desired, the data can be included in the control file itself, after the specifications controlling the load process, such as the field list and so on.
The data can be provided either as fixed-length fields or in free format with the use of a special delimiter character, such as a comma (,) or a pipe (|). Since the control file is so important, let us start with it.
Studying the SQL*Loader control file
The SQL*Loader control file is a simple text file that specifies important details of the load job, such as the location of the source data file, as well as the scheme for mapping the data in the source file to the columns of the target table.
It may also specify any transformation operations to be performed during the load, as well as the names of the log files to be used for the load and the names of the files used to capture bad and discarded data. In general, the SQL*Loader control file provides instructions on the following aspects:
- the source of the data to be loaded into the database;
- the specification of the columns in the target table;
- the nature of the formatting used in the input file;
- the mapping of input file fields to target table columns;
- data transformation rules (applying SQL functions);
- the location of the log files and error files.
Listing 13.1 gives an example of a typical SQL*Loader control file. The SQL*Loader utility treats the rows of data in the source files as records, so the control file may also specify the format of the records.
Note that a separate data file is also allowed. In this example, however, the data to be loaded directly follows the control information, through the INFILE * specification in the control file. This specification indicates that the data to be loaded will follow the control information.
When performing a one-off load operation, it is probably easiest to place the data in the control file itself. The BEGINDATA keyword shows SQL*Loader where the data portion of the control file begins.
LOAD DATA
INFILE *
BADFILE test.bad
DISCARDFILE test.dsc
INSERT
INTO TABLE tablename
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(column1 POSITION (1:2) CHAR,
column2 POSITION (3:9) INTEGER EXTERNAL,
column3 POSITION (10:15) INTEGER EXTERNAL,
column4 POSITION (16:16) CHAR
)
BEGINDATA
AY3456789111111Y
/* Other data . . .*/
The part of the control file that describes the data fields is called the field list. In the control file shown in Listing 13.1, the field list looks like this:
(column1 POSITION (1:2) char,
column2 POSITION (3:9) integer external,
column3 POSITION (10:15) integer external,
column4 POSITION (16:16) char
)
Here you can see that the field list gives the names of the fields, their positions, data types, delimiters, and any applicable conditions.
In the control file, you can specify many variables, which, roughly speaking, fall into the following groups:
- clauses related to the load itself;
- clauses related to the data files;
- clauses concerning table and field mapping;
- command-line parameters defined in the control file.
The following subsections describe in more detail all of these different types of parameters that can be set in the control file to configure the load process.
Tip. If you are not sure which parameters to use for an SQL*Loader session, you can simply enter sqlldr at the operating system command line to see all available options. Running this command displays a list of all possible parameters and the default values accepted for them in your particular operating system (if any).
The LOAD DATA keywords appear at the very beginning of the control file and simply mean that data is to be loaded from the input file into Oracle tables using SQL*Loader.
The INTO TABLE clause indicates which table the data should be loaded into. If you need to load data into several tables at once, you need one INTO TABLE clause for each table. The INSERT, REPLACE, and APPEND keywords tell the database how the data should be loaded.
If you use the INSERT clause, the table must be empty; otherwise, the load process generates an error and stops. The REPLACE clause tells Oracle to truncate the table and start loading new data. When performing a load with the REPLACE option, it will often appear at first as if the table is hanging. In fact, at that moment Oracle is truncating the table before starting the load. As for the APPEND clause, it tells Oracle to add the new rows to the table's existing data.
Clauses concerning data files
There are several clauses that can be used to specify the location and other characteristics of the data file or files from which data is to be loaded via SQL*Loader. The following subsections describe some of the most important of these clauses.
Specifying the data file
The name and location of the input file are specified with the INFILE parameter:
INFILE='/a01/app/oracle/oradata/load/consumer.dat'
If you do not want to use the INFILE specification, you can include the data in the control file itself. When data is included in the control file instead of a separate input file, the file location is omitted and only the * symbol is given in its place:
INFILE *
If you choose to include the data in the control file itself, you must use the BEGINDATA clause before the start of the data:
BEGINDATA
Nicholas Alapati,243 New Highway,Irving,TX,75078
. . .
Physical and logical records
Each physical record in the source file is equivalent to one logical record by default, but if desired you can also specify in the control file that one logical record should be built from several physical records. For example, in the following input file, all three physical records would by default also be considered three logical records:
Nicholas Alapati,243 New Highway,Irving,TX,75078
Shannon Wilson,1234 Elm Street,Fort Worth,TX,98765
Nina Alapati,2629 Skinner Drive,Flower Mound,TX,75028
To combine these physical records into logical ones, you can use either the CONCATENATE clause or the CONTINUEIF clause in the control file.
If the input data is in a fixed format, you can specify the number of rows to be read to form each logical record as follows:
CONCATENATE 4
Specifically, this CONCATENATE clause says that one logical record should be obtained by combining four rows of data. If each data row consists of 80 characters, the new logical record being created will contain 320 characters. For this reason, when the CONCATENATE clause is used, the record length clause (RECLEN) should be defined along with it. In this case, it should look like this:
RECLEN 320
As for the CONTINUEIF clause, it combines physical records into logical ones based on one or more characters in a particular position. For example:
CONTINUEIF THIS (1:4) = 'next'
In this example, the CONTINUEIF clause says that if the four characters next are found at the beginning of a line, SQL*Loader should treat all the data that follows as a continuation of the previous line (the four characters of the word next were chosen arbitrarily: any characters may act as continuation indicators).
When fixed-format data is used, the CONTINUEIF character can be placed in the last column, as shown in the following example:
CONTINUEIF LAST = '&'
Here, the CONTINUEIF clause says that if an ampersand (&) is found at the end of a line, the SQL*Loader utility should treat the next line as a continuation of the previous one.
Please note! Using either the CONTINUEIF or the CONCATENATE clause slows SQL*Loader down, so it is still better to map physical and logical records one-to-one. This is because combining multiple physical records into a single logical one requires SQL*Loader to perform additional scanning of the input data, which takes more time.
Record Format
You can apply one of the following three formats to your records.
- Stream format. This format is the most common and uses a special terminator character to indicate the end of a record. When scanning the input file, the SQL*Loader utility knows it has reached the end of a record when it encounters the terminator character. If no terminator is specified, the default is the newline character (which on Windows is preceded by a carriage return character). The set of three records given in the previous example uses this format.
- Variable format. This format requires the length of each record to be stated explicitly at its beginning, as shown in the following example:
INFILE 'example1.dat' "var 2"
06sammyy12johnson,1234
This line contains two records: the first six characters long (sammyy) and the second twelve characters long (johnson,1234). The var 2 clause tells SQL*Loader that the data records are of variable size and that each new record is preceded by a 2-character length field.
- Fixed format. This format sets a specific fixed size for all records. Below is an example specifying that every record is 12 bytes long:
INFILE 'example1.dat' "fix 12"
sammyy,1234,johnso,1234
In this example, at first glance it looks as if the record is the whole line (sammyy,1234,johnso,1234), but the fix 12 clause indicates that the line actually contains two complete 12-character records. So when fixed format is used in the source data file, a line may hold several records.
Clauses concerning table and field mapping
During a load session, SQL*Loader takes the data fields from the data records and turns them into table columns. The table and field mapping clauses assist in this process. Through them, the control file provides details about the fields, including column names, position, the types of data in the input records, delimiters, and transformation parameters.
The column name in the table
Each column in the table is clearly defined, with the position and data type of the corresponding field value in the input file. You do not have to load all of the table's columns. If you skip any columns in the control file, they are automatically set to NULL.
Position
The SQL*Loader utility needs some way to find out where the various fields are in the input file. Fields are the individual items in the data file, and there is no direct correspondence between these fields and the columns of the table into which the data is loaded. The process of mapping the fields of the input data file to the table columns in the database is called field setting, and it consumes the most CPU time during the load. The POSITION clause lets you pin down the exact position of the various fields in the data record. The position specified in this clause can be relative or absolute.
A relative position means the field's position relative to the position of the preceding field, as shown in the following example:
employee_id POSITION(*) NUMBER EXTERNAL 6
employee_name POSITION(*) CHAR 30
In this example, the POSITION clause instructs SQL*Loader to load the first field, employee_id, and then proceed to the employee_name field, which starts at position 7 and is 30 characters long.
An absolute position simply states where each field begins and ends:
employee_id POSITION(1:6) INTEGER EXTERNAL
employee_name POSITION(7:36) CHAR
Data Types
The data types used in the control file apply only to the input records; they do not match the data types of the columns in the database tables. The main data types that can be used in the SQL*Loader control file are listed below:
INTEGER(n) — a binary integer, where n can be 1, 2, 4, or 8
SMALLINT
CHAR
INTEGER EXTERNAL
FLOAT EXTERNAL
DECIMAL EXTERNAL
Separators
After specifying the data types, you can specify the delimiter to be used to separate the fields. It can be defined using either the TERMINATED BY clause or the ENCLOSED BY clause.
The TERMINATED BY clause ends the field at the specified character and thus marks where the field stops. Examples are given below:
TERMINATED BY WHITESPACE
TERMINATED BY ","
In the first example, the TERMINATED BY clause says that the field ends at the first whitespace character encountered, and in the second example, that a comma separates the fields.
The ENCLOSED BY '"' clause indicates that a pair of double quotation marks should act as the field-enclosing characters. Below is an example of how to use this clause:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
Tip. Oracle recommends avoiding delimiters and specifying field positions wherever possible (using the POSITION parameter). Specifying field positions saves the database from having to scan the data file looking for the chosen delimiters, thus reducing processing time.
Data conversion parameters
If you wish, you can have SQL functions applied to the field data before it is loaded into the table columns. Generally, only SQL functions that return single values can be used to transform field values. Within the SQL string, the field must be referenced using the :field_name syntax.
The SQL function or functions themselves must be specified after the data type associated with the field and enclosed in double quotation marks, as shown in the following examples:
field_name CHAR TERMINATED BY "," "SUBSTR(:field_name, 1, 10)"
employee_name POSITION 32-62 CHAR "UPPER(:employee_name)"
salary POSITION 75 CHAR "TO_NUMBER(:salary, '$99,999.99')"
commission INTEGER EXTERNAL ":commission * 100"
As you can see, applying SQL operations and functions to field values before they are loaded into tables helps transform the data right at load time.
Command-line parameters specified in the control file
The SQL*Loader utility lets you specify a number of runtime parameters on the command line when invoking its executable. Parameters whose values should stay the same across all jobs are usually set in a separate parameter file, which lets you use the command line itself only to launch SQL*Loader jobs, whether interactively or as scheduled batch jobs.
The command line then specifies only the parameters specific to that run, along with the name and location of the control file.
Alternatively, the runtime parameters can be specified inside the control file itself using the OPTIONS clause. True, the runtime parameters can always be given when invoking SQL*Loader, but if they are frequently repeated, it is still better to specify them in the control file using the OPTIONS clause.
Using the OPTIONS clause is especially convenient when the amount of information that would have to go on the SQL*Loader command line exceeds the maximum command-line length allowed by the operating system.
For your information! Setting a parameter on the command line overrides the value specified for it in the control file.
The following subsections describe some of the most important parameters that can be set in the control file with the OPTIONS clause.
The USERID parameter
The USERID parameter lets you specify the name and password of the database user who has the privileges required for the data load:
USERID = samalapati/sammyy1
The CONTROL parameter
The CONTROL parameter lets you specify the name of the control file to be used for the SQL*Loader session. The control file may include the specifications of all the load parameters. Of course, it is also possible to load data using manually entered commands, but using a control file gives you more flexibility and lets you automate the load process.
CONTROL = '/test01/app/oracle/oradata/load/finance.ctl'
The DATA parameter
The DATA parameter lets you specify the name of the input data file to be used for the load. By default, the name of such a file ends with the .dat extension. Note that the data to be loaded does not have to be inside a separate data file. If desired, it may also be included in the control file, right after the details of the load process itself.
DATA = '/test02/app/oracle/oradata/load/finance.dat'
The BINDSIZE and ROWS parameters
The BINDSIZE and ROWS parameters let you specify the size of the bind array used in conventional loading mode. When loading in conventional mode, the SQL*Loader utility does not insert data into the table row by row. Instead, it inserts an entire set of rows into the table at once; this set of rows is called the bind array, and either the BINDSIZE or the ROWS parameter determines its size.
The BINDSIZE parameter specifies the bind array size in bytes. On the author's system, this size was 256,000 bytes by default.
BINDSIZE = 512000
As for the ROWS parameter, it does not impose any limit on the number of bytes in the bind array. Instead, it limits the number of rows that each bind array can contain; SQL*Loader multiplies the value of the ROWS parameter by its estimate of the size of each row in the table. On the author's system, the default for the ROWS parameter was 64 rows.
ROWS = 64000
Just for the record! When values are set for both BINDSIZE and ROWS, the SQL*Loader utility uses whichever of the two yields the smaller bind array.
The DIRECT parameter
If DIRECT is set to true (DIRECT=true), the SQL*Loader utility uses direct-path loading instead of conventional loading. The default value for this parameter is false (DIRECT=false), meaning that conventional loading mode is used by default.
The ERRORS parameter
The ERRORS parameter lets you specify how many errors may occur before the SQL*Loader job is terminated. On most systems, this parameter defaults to 50. If you do not want to tolerate any errors, you can set it to 0:
ERRORS = 0
The LOAD parameter
With the LOAD parameter, you can specify the maximum number of logical records to load into the table. By default, all the records contained in the input data file are loaded.
LOAD = 10000
The LOG parameter
The LOG parameter lets you specify the name of the log file that the SQL*Loader utility should use during the load. The log file, discussed later, provides a lot of useful information about the SQL*Loader session.
LOG = '/u01/app/oracle/admin/finance/logs/financeload.log'
The BAD parameter
The BAD parameter lets you specify the name and location of the bad file. If records are rejected because of data formatting errors, the SQL*Loader utility writes all such records to this file. For example, a field may exceed its specified length and, as a result, be rejected by the SQL*Loader utility.
Note that records may be rejected not only by the SQL*Loader utility but also by the database itself. For example, when you try to insert rows with duplicate primary key values, the database will reject the insert. Such records are also placed in the bad file. If the name of the bad file is not explicitly given, Oracle creates one automatically, using a default name with the control file's name as the prefix.
BAD = '/u01/app/oracle/load/financeload.bad'
The SILENT parameter
By default, SQL*Loader displays feedback messages on the screen to inform you about the progress of the load. You can turn these messages off with the SILENT parameter if you wish. Several values can be set for this parameter. For example, you can turn off all types of messages by setting it to ALL:
SILENT = ALL
The DISCARD and DISCARDMAX parameters
All records rejected at load time because they do not meet the selection criteria specified in the control file are placed in the discard file. By default, this file is not created. Oracle creates it only if there are discarded records, and even then only if it was explicitly specified in the control file. The DISCARD parameter is used to specify the name and location of the discard file in the control file:
DISCARD = '/test01/app/oracle/oradata/load/finance.dsc'
By default, SQL*Loader does not impose any limit on the number of records, so all logical records may end up discarded. With the DISCARDMAX parameter, however, you can limit the number of discarded records.
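Following the pattern of the other parameters, a one-line example; the limit of 100 is an arbitrary value chosen for illustration:
DISCARDMAX = 100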
Tip. Records are written in their original format to both the bad and discard files. This makes it easy, particularly when loading large amounts of data, to edit these files appropriately and use them to reload the data that could not be loaded during the initial session.
The PARALLEL parameter
The PARALLEL parameter lets you specify whether SQL*Loader may run several parallel sessions when loading in direct mode:
sqlldr USERID=salapati/sammyy1 Command=load1.ctl DIRECT=true PARALLEL=true
The RESUMABLE parameter
With the RESUMABLE parameter, you can enable Oracle's Resumable Space Allocation feature (resuming an operation once a space allocation problem has been resolved). If this feature is enabled and a space problem occurs during the load, the job is simply suspended, the administrator is notified and can allocate more space, and the job then continues without problems.
The Resumable Space Allocation feature is described in more detail in Chapter 8. By default, RESUMABLE is set to false, which means Resumable Space Allocation is disabled. To enable it, simply set the parameter to true (RESUMABLE=true).
The RESUMABLE_NAME parameter
The RESUMABLE_NAME parameter lets you give a name to a specific load job that is resumable under the Resumable Space Allocation feature. By default, its value is formed by combining the user name, session identifier, and instance identifier.
RESUMABLE_NAME = finance1_load
The RESUMABLE_TIMEOUT parameter
The RESUMABLE_TIMEOUT parameter can be set only if the RESUMABLE parameter is set to true. It lets you define the timeout, i.e., the maximum time for which an operation may remain suspended after running into a space-related problem. If the problem cannot be resolved within this time, the operation is aborted. By default, this timeout is 7,200 seconds.
RESUMABLE_TIMEOUT = 3600
The SKIP parameter
The SKIP parameter is very handy in situations where SQL*Loader aborts a job because of errors but has already managed to commit some rows. It lets you skip a given number of rows in the input file when running the SQL*Loader job a second time. The alternative is to truncate the table and restart the SQL*Loader job from scratch, which is not very convenient if a decent number of rows has already been loaded into the database tables.
SKIP = 235550
In this example, it is assumed that the first run of the job was interrupted after 235,549 rows had been loaded successfully. This information can be obtained either by looking at the log file for that load session or by querying the table itself, as shown below.
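A minimal sketch of such a check, assuming the data was being loaded into a hypothetical table named finance_data:

SELECT COUNT(*) FROM finance_data;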
Generating data during the load
The SQL*Loader utility can generate data for loading into columns. This means you can perform a load without using any data file at all. More often, though, values are generated for one or more columns while the rest of the data is loaded from a data file. Below is a list of the kinds of values SQL*Loader can generate (a combined sketch follows the list).
Constant value. Using the CONSTANT clause, you can set a column to a constant value. For example, in the following case the clause specifies that every row filled during the session must have the value sysadm in the loaded_by column:
loaded_by CONSTANT "sysadm"
Expression value. With the EXPRESSION clause, you can set the column to the value of a SQL expression or a PL/SQL function, as shown below:
column_name EXPRESSION "SQL string"
Data file record number. Using the RECNUM clause, you can set the column to the number of the record that produced the row:
record_num RECNUM
System date. The sysdate variable can be used to set a column to the date the data was loaded:
loaded_date sysdate
Sequence. With the SEQUENCE function, you can generate unique values for a column. In the following example, the function says to start with the current maximum value of the loadseq sequence and increase the value by one each time a row is inserted:
loadseq SEQUENCE(max,1)
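Putting these generation clauses together, here is a minimal control-file sketch; the table name, the account_no field, and the file name are assumptions made up for this example:

LOAD DATA
INFILE 'finance.dat'
APPEND
INTO TABLE audit_stage
FIELDS TERMINATED BY ','
(account_no,
loaded_by CONSTANT "sysadm",
record_num RECNUM,
loaded_date sysdate,
loadseq SEQUENCE(max,1)
)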
Calling SQL*Loader
There are several ways to invoke the SQL*Loader utility. The standard syntax for calling SQL*Loader looks like this:
SQLLDR keyword=value [,keyword=value,...]
Below is an example of how to call SQL*Loader:
$ sqlldr USERID=nicholas/nicholas1 CONTROL=/u01/app/oracle/finance/finance.ctl \
DATA=/u01/app/oracle/oradata/load/finance.dat \
LOG=/u01/app/oracle/finance/log/finance.log \
ERRORS=0 DIRECT=true SKIP=235550 RESUMABLE=true RESUMABLE_TIMEOUT=7200
Just for the record! When calling the SQL*Loader utility from the command line, the backslash character (\) at the end of each line means that the command continues on the next line. Command-line parameters can be specified by name or by position.
For example, the parameter holding the username and password always immediately follows the sqlldr keyword. If a parameter is omitted, Oracle uses its default value. If you wish, you can add a comma after each parameter.
It is easy to see that the more parameters you need, the more information you have to supply on the command line. This approach has two drawbacks. First, typos and other mistakes can cause confusion. Second, some operating systems limit the number of characters that can be entered on a command line. Fortunately, the same job can also be started with the following, much simpler command:
$ sqlldr PARFILE=/u01/app/oracle/admin/finance/load/finance.par
PARFILE stands for parameter file, i.e., a file that can contain values for all the command-line parameters. For example, for the load specifications shown in this chapter, this file looks like this:
USERID=nicholas/nicholas1
CONTROL='/u01/app/oracle/admin/finance/finance.ctl'
DATA='/app/oracle/oradata/load/finance.dat'
LOG='/u01/app/oracle/admin/finance/log/finance.log'
ERRORS=0
DIRECT=true
SKIP=235550
RESUMABLE=true
RESUMABLE_TIMEOUT=7200
Using a parameter file is a more elegant approach than entering all the parameters on the command line, and it is also more logical when you regularly run jobs with the same parameters. Any option specified on the command line overrides the value set for that parameter in the parameter file.
If you want to use the command line but rule out the possibility of someone seeing the password as it is entered, you can call SQL*Loader like this:
$ sqlldr CONTROL=control.ctl
In this case, SQL*Loader prompts you for the username and password.
The SQL*Loader log file
The SQL*Loader log file contains a great deal of information about the utility's session. It tells you how many records were supposed to be loaded and how many actually were, as well as which records failed to load and why. It also describes the columns that were specified for the fields in the SQL*Loader control file. Listing 13.2 gives an example of a typical SQL*Loader log file.
SQL*Loader: Release 11.1.0.0.0 - Production on Sun Aug 24 14:04:26 2008
Control File: /u01/app/oracle/admin/fnfactsp/load/test.ctl
Data File: /u01/app/oracle/admin/fnfactsp/load/test.ctl
Bad File: /u01/app/oracle/admin/fnfactsp/load/test.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 0
Bind array: 64 rows, max of 65536 bytes
Continuation: none specified
Path used: Conventional
Table TBLSTAGE1, loaded when ACTIVITY_TYPE != 0X48(character 'H')
and ACTIVITY_TYPE != 0X54(character 'T')
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
----------------------- -------- ----- ---- ---- ---------
COUNCIL_NUMBER FIRST * , CHARACTER
COMPANY NEXT * , CHARACTER
ACTIVITY_TYPE NEXT * , CHARACTER
RECORD_NUMBER NEXT * , CHARACTER
FUND_NUMBER NEXT * , CHARACTER
BASE_ACCOUNT_NUMBER NEXT * , CHARACTER
FUNCTIONAL_CODE NEXT * , CHARACTER
DEFERRED_STATUS NEXT * , CHARACTER
CLASS NEXT * , CHARACTER
UPDATE_DATE SYSDATE
UPDATED_BY CONSTANT
Value is 'sysadm'
BATCH_LOADED_BY CONSTANT
Value is 'sysadm'
/* Discarded records section: gives the complete list of discarded
records, including the reasons why they were discarded. */
Record 1: Discarded - failed all WHEN clauses.
Record 1527: Discarded - failed all WHEN clauses.
Table TBLSTAGE1:
/* Number of rows section: gives the number of rows successfully loaded
and the number of rows not loaded due to errors or because they failed
the WHEN conditions, if any. Here, two records failed the WHEN condition. */
1525 Rows successfully loaded.
0 Rows not loaded due to data errors.
2 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
/* Memory section: gives the bind array size chosen for the data load. */
Space for bind array: 99072 bytes(64 rows)
Read buffer bytes: 1048576
/* Logical records section: gives the totals of logical records
skipped, read, rejected, and discarded. */
Total logical records skipped: 0
Total logical records read: 1527
Total logical records rejected: 0
Total logical records discarded: 2
/* Date section: gives the day and date of the data load. */
Run started on Sun Mar 06 14:04:26 2009
Run ended on Sun Mar 06 14:04:27 2009
/* Time section: gives the time taken to complete the data load. */
Elapsed time was: 00:00:01.01
CPU time was: 00:00:00.27
When studying the log file, pay the most attention to how many logical records were read and which records were skipped, rejected, or discarded. If you run into any difficulties with a job, the log file is the first place to look to find out whether the data records are actually being loaded.
Using exit codes
The log file records a lot of information about the load process, but Oracle also lets you capture an exit code after each load. This gives you a way to check the results of a load when it is executed by a cron job or a shell script (a minimal sketch follows the list below). If you use a Windows server to schedule load jobs, you can use the at command. The following are the key exit codes found on UNIX and Linux operating systems:
- EX_SUCC 0 means that all rows were loaded successfully;
- EX_FAIL 1 indicates that command-line or syntax errors were detected;
- EX_WARN 2 means that some or all rows were rejected;
- EX_FTL 3 indicates that operating system errors occurred.
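Here is a minimal shell sketch of such a check; the parameter file path is an assumption made up for this example:

#!/bin/sh
# Run the load; credentials are assumed to be in the parameter file
sqlldr PARFILE=/u01/app/oracle/load/finance.par
status=$?
if [ $status -eq 0 ]; then
    echo "All rows loaded successfully"
elif [ $status -eq 2 ]; then
    echo "Warning: some or all rows were rejected"
else
    echo "Load failed with exit code $status"
fi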
Using the direct-path loading method
So far, the SQL*Loader utility has been considered in terms of the conventional loading mode. As we recall, the conventional method uses INSERT SQL statements to insert data into tables, one bind array at a time.
The direct-path loading method does not use SQL statements to place data in tables; instead, it formats Oracle data blocks and writes them directly to the database files. This direct-write process eliminates most of the overhead of executing SQL statements to load tables.
Since the direct-path method does not involve contention for database resources, it runs much faster than conventional loading. For loading large amounts of data, direct-path loading is the most suitable, and possibly the only effective, method, for the simple reason that a conventional load would take more time than is available.
Besides the obvious advantage of a shorter load time, direct mode also lets you rebuild indexes and presort data. In particular, it has the following advantages over the conventional loading mode.
- Loading is much faster than in conventional mode because SQL statements are not used.
- Multiblock asynchronous I/O operations are used to write the data to the database files, so writing is fast.
- There is an option to presort the data using efficient sorting routines.
- By setting UNRECOVERABLE to Y (UNRECOVERABLE=Y), you can prevent redo data from being written during the load.
- Thanks to the temporary storage mechanism, index building can be done more efficiently than in conventional loading mode.
For your information! Conventional loading always generates redo records, while direct-path loading generates them only under certain conditions. In addition, in direct mode, insert triggers do not fire, whereas in conventional mode they always fire during the load. Finally, unlike conventional mode, direct mode prevents users from making any changes to the table being loaded.
Despite all of the above, the direct-path loading method also has some serious limitations. In particular, it cannot be used under the following conditions:
- when using clustered tables;
- when loading parent and child tables simultaneously;
- when loading data into VARRAY or BFILE columns;
- when loading across heterogeneous platforms using Oracle Net;
- when you want to apply SQL functions during the load.
Just for the record! You cannot use any SQL functions in direct-path loading mode. If you need to load large amounts of data and also transform it during the load, this can lead to problems.
Conventional loading lets you use SQL functions to transform data, but it is very slow compared to direct mode. Therefore, for loading large amounts of data, it may be preferable to use the newer loading and transformation technologies, such as external tables or table functions.
Parameters applicable to direct-path loading
Several SQL*Loader parameters are designed specifically for use with the direct-path loading method, or are better suited to it than to the conventional method. These parameters are described below; an example invocation follows the list.
- DIRECT. If you want to use the direct-path loading method, you must set DIRECT to true (DIRECT=true).
- DATA_CACHE. The DATA_CACHE parameter is handy when the same data, or date and time (TIMESTAMP) values, are loaded repeatedly during a direct-path load. The SQL*Loader utility has to convert date and time data every time it encounters it. So if the data being loaded contains duplicate date and time values, setting DATA_CACHE reduces the number of unnecessary conversions of these values and thus reduces processing time. By default, the DATA_CACHE parameter keeps 1000 values in the cache. If there are no duplicate date and time values in the data, or very few, the parameter can be disabled altogether by setting it to 0 (DATA_CACHE=0).
- ROWS. The ROWS parameter is important because it lets you specify how many rows the SQL*Loader utility reads from the input data file before saving the inserts to the tables. It is used to define the upper limit on the amount of data lost if the instance fails during a long SQL*Loader job. After reading the number of rows specified in this parameter, SQL*Loader stops loading data until the contents of all the data buffers have been successfully written to the data files. This process is called a data save. For example, if SQL*Loader can load about 10,000 rows per minute, setting the ROWS parameter to 150,000 (ROWS=150000) causes a data save every 15 minutes.
- UNRECOVERABLE. The UNRECOVERABLE parameter minimizes the use of the redo log during direct-path loading (it is specified in the control file).
- SKIP_INDEX_MAINTENANCE. The SKIP_INDEX_MAINTENANCE parameter, when enabled (SKIP_INDEX_MAINTENANCE=true), tells SQL*Loader not to bother with index maintenance during the load. It is set to false by default.
- SKIP_UNUSABLE_INDEXES. Setting SKIP_UNUSABLE_INDEXES to true ensures that SQL*Loader loads even tables whose indexes are in an unusable state. The load proceeds, but the unusable indexes are not maintained. The default value of this parameter depends on the value of the SKIP_UNUSABLE_INDEXES initialization parameter, which is true by default.
- SORTED_INDEXES. The SORTED_INDEXES parameter notifies SQL*Loader that the data has been sorted on particular indexes, which helps speed up the load.
- COLUMNARRAYROWS. This parameter lets you specify how many rows are loaded before the stream buffer is built. For example, if you set it to 100,000 (COLUMNARRAYROWS=100000), 100,000 rows are loaded first. Thus, the size of the column array used during a direct-path load depends on the value of this parameter. The default value of this parameter was 5,000 rows for the author on a UNIX server.
- STREAMSIZE. The STREAMSIZE parameter sets the stream buffer size. For the author on a UNIX server, for example, this size was 256,000 bytes by default; to increase it, you could set the STREAMSIZE parameter, for example, STREAMSIZE=512000.
- MULTITHREADING. When the MULTITHREADING parameter is enabled, the conversion of column arrays into stream buffers and the loading of those stream buffers run in parallel. On machines with multiple CPUs, this parameter is enabled by default (set to true). If you wish, you can turn it off by setting it to false (MULTITHREADING=false).
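Pulling several of these parameters together, here is a hypothetical direct-path invocation; the credentials, control file, and parameter values are assumptions made up for this example:

$ sqlldr USERID=samalapati/sammyy1 CONTROL=finance.ctl DIRECT=true \
  ROWS=150000 COLUMNARRAYROWS=100000 STREAMSIZE=512000 MULTITHREADING=true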
Handling constraints and triggers with the direct-path loading method
Direct-path loading inserts data directly into the data files by formatting data blocks. Since INSERT statements are not used, table constraints and triggers are not systematically applied during a direct-path load. Instead, all triggers are disabled, as are some integrity constraints.
The SQL*Loader utility automatically disables all foreign key and CHECK integrity constraints, but it still enforces NOT NULL, UNIQUE, and PRIMARY KEY constraints. When the job finishes, SQL*Loader automatically re-enables the disabled constraints if the REENABLE clause was specified. Otherwise, you will need to re-enable them manually. As for triggers, they are always re-enabled automatically when the load finishes.
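A minimal control-file sketch of the REENABLE clause; the emp table, its fields, and the emp_exceptions exceptions table are assumptions made up for this example:

LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
REENABLE DISABLED_CONSTRAINTS EXCEPTIONS emp_exceptions
FIELDS TERMINATED BY ','
(emp_id, emp_name)

Rows that violate a constraint when it is re-enabled would then be recorded in the named exceptions table.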
Tips for optimal apply of SQL*Loader
To use SQL*Loader in an optimal way, especially when loading big amounts of data and/or having multiple indexes and restrictions associated with tables in a database, it is recommended to practise the following.
- Attempt to apply the boot method in direct mode as often as possible. It works much faster than the kick method in normal mode.
- Use it wherever possible (with direct kick mode), the UNRECOVERABLE=true selection. This will salve a decent corporeality of time considering new downloadable data will not demand to be fixed in a rerun log file. The ability to perform media recovery nevertheless remains valid for all other database users, plus a new SQL*Loader session tin ever be started if a problem occurs.
- Minimize the utilise of the NULLIF and DEFAULTIF parameters. These constructs must always exist tested on every line for which they are applied.
- Limit the number of data type and character prepare conversion operations because they slow down the processing.
- Wherever possible, use positions rather than separators for the fields. The SQL*Loader utility is much faster to movement from field to field when their positions are provided.
- Display physical and logical records in a ane-to-1 manner.
- Disable the restrictions before starting the boot process because they will ho-hum it down. Of form, when you turn on the restrictions again sometimes errors may appear, but a much faster execution of the data loading is worth information technology, especially in the case of large tables.
- Specify the SORTED_INDEXES design in case you lot apply the straight loading method to optimize the speed of loading.
- Drop the indexes associated with the tables before starting the load when large data volumes are being loaded. If it is impossible to drop the indexes, you can mark them unusable and specify the SKIP_UNUSABLE_INDEXES option for a conventional path load, or the SKIP_INDEX_MAINTENANCE option for a direct path load. (A combined sketch follows this list.)
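To make several of these tips concrete, here is a sketch of a direct path load that uses positional fields, UNRECOVERABLE, and SORTED INDEXES; all file, table, and index names are hypothetical:

-- fast_load.ctl
UNRECOVERABLE LOAD DATA
INFILE 'big_table.dat'
APPEND
INTO TABLE big_table
SORTED INDEXES (big_table_pk)
(id   POSITION(1:10)  INTEGER EXTERNAL,
 name POSITION(12:40) CHAR)

$ sqlldr userid=scott/tiger control=fast_load.ctl direct=true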
Some useful tricks for loading data using SQL*Loader
Using SQL*Loader is an effective approach, but not without its share of tricks. This section describes how to perform some special types of operations while loading data.
Using the WHEN clause during data load operations
The WHEN clause can be used during load operations to limit the loaded data to only those rows that meet certain conditions. For instance, it can be used to select from a data file only those records that contain a field meeting specific criteria. Below is an example demonstrating the use of the WHEN clause in a SQL*Loader control file:
LOAD DATA
INFILE *
INTO TABLE stagetbl
APPEND
WHEN (activity_type <> 'H') AND (activity_type <> 'T')
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
/* Here are the columns of the table... */
BEGINDATA
/* Here comes the data... */
Here, the condition in the WHEN clause specifies that any record whose field corresponding to the activity_type column in the stagetbl table contains either H or T must be rejected; only rows where activity_type is neither H nor T are loaded.
Loading the username into a table
You can use the USER pseudo-variable to insert the current username into a table during the load process. Below is an example illustrating how to use this variable. Note that the stagetbl target table must contain a column named loaded_by for the SQL*Loader utility to be able to insert the username into it.
LOAD DATA
INFILE *
INTO TABLE stagetbl
INSERT
(loaded_by "USER")
/* Here are the columns of the table, and then the data itself... */
Loading large data fields into a table
When you try to load a field larger than 255 bytes into a table, even if the column is of type VARCHAR2(2000) or CLOB, SQL*Loader cannot load the data and generates the error message "Field in data file exceeds maximum length". This happens because, when no length is specified, SQL*Loader assumes a character field of at most 255 bytes.
To load a larger field, you need to specify the size of the corresponding table column in the control file when mapping the table columns to the data fields, as shown in the following example (where the corresponding column is named text):
LOAD DATA
INFILE '/u01/app/oracle/oradata/load/testload.txt'
INSERT INTO TABLE test123
FIELDS TERMINATED BY ','
(text CHAR(2000))
Loading a sequence number into a table
Suppose there is a sequence named test_seq, and it needs to be incremented each time a new data record is loaded into the table. This behavior can be achieved in the following manner:
LOAD DATA
INFILE '/u01/app/oracle/oradata/load/testload.txt'
INSERT INTO TABLE test123
(test_seq.nextval, ...)
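Note that in many SQL*Loader versions a database sequence is referenced through a SQL string applied to a named column rather than by naming the sequence directly. A minimal sketch of that form (the rec_id column name is a hypothetical illustration):

LOAD DATA
INFILE '/u01/app/oracle/oradata/load/testload.txt'
INSERT INTO TABLE test123
FIELDS TERMINATED BY ','
(rec_id "test_seq.nextval",
 text CHAR(2000))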
Extracting data from a table into an ASCII file
Sometimes it is necessary to extract data from database tables into flat files, for example, in order to use them to load data into Oracle tables located elsewhere. If you have many such tables, you can write complex scripts, but if only a few tables are involved, the following simple method of extracting data using SQL*Plus commands is quite suitable:
SET TERMOUT OFF
SET PAGESIZE 0
SET ECHO OFF
SET FEED OFF
SET HEAD OFF
SET LINESIZE 100
COLUMN customer_id FORMAT 999,999
COLUMN first_name FORMAT a15
COLUMN last_name FORMAT a25
SPOOL test.txt
SELECT customer_id, first_name, last_name FROM customer;
SPOOL OFF
You can also use the UTL_FILE package to write table data to text files.
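A minimal UTL_FILE sketch; the DATA_DIR directory object and the customer table columns are assumptions for illustration:

-- Assumes: CREATE DIRECTORY data_dir AS '/u01/app/oracle/oradata/load';
DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  -- Open the output file for writing in the DATA_DIR directory
  v_file := UTL_FILE.FOPEN('DATA_DIR', 'customer.txt', 'w');
  -- Write one comma-separated line per row
  FOR rec IN (SELECT customer_id, first_name, last_name FROM customer) LOOP
    UTL_FILE.PUT_LINE(v_file,
      rec.customer_id || ',' || rec.first_name || ',' || rec.last_name);
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
END;
/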
Dropping indexes before loading large data volumes
There are two main reasons to seriously consider dropping the indexes associated with a large table before loading data in direct mode using the NOLOGGING option. First, loading the data together with the indexes may take more time than loading the data alone. Second, if the indexes are left active, the changes to their structure during the load will generate redo records.
Tip. Even when you choose to load data using the NOLOGGING option, a decent amount of redo will still be generated to record the changes made to the indexes. In addition, some more redo information will be generated to support the data dictionary, even during the load operation itself with the NOLOGGING option. Therefore, the best strategy in this case is to drop the indexes and recreate them after the tables have been loaded.
During a direct path load, an instance may crash halfway through, the space required by the SQL*Loader utility to perform index updates may run out, or duplicate index key values may be encountered. All of these situations leave the indexes in an unusable state, because after the instance is recovered the indexes become unusable. To avoid them, it may also be better to create the indexes after the load process is finished.
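If dropping an index is not an option, a common alternative is to mark it unusable before the load and rebuild it afterwards (the index name here is hypothetical):

ALTER INDEX big_table_idx UNUSABLE;
-- run the load, e.g.:
-- sqlldr userid=scott/tiger control=fast_load.ctl direct=true skip_index_maintenance=true
ALTER INDEX big_table_idx REBUILD NOLOGGING;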
Loading data into several tables
You can use the same SQL*Loader session to load data into several tables. Here is an example of how to load data into two tables at once:
LOAD DATA
INFILE *
INSERT
INTO TABLE dept
WHEN recid = 1
(recid FILLER POSITION(1:1) INTEGER EXTERNAL,
 deptno POSITION(3:4) INTEGER EXTERNAL,
 dname POSITION(8:21) CHAR)
INTO TABLE emp
WHEN recid <> 1
(recid FILLER POSITION(1:1) INTEGER EXTERNAL,
 empno POSITION(3:6) INTEGER EXTERNAL,
 ename POSITION(8:17) CHAR,
 deptno POSITION(19:20) INTEGER EXTERNAL)
In this example, data from the same data file is loaded simultaneously into two tables, dept and emp, depending on whether the recid field of the record contains the value 1 or not.
Intercepting SQL*Loader error codes
On UNIX/Linux systems, SQL*Loader sets an exit code when it finishes: 0 for success, 1 for failure, 2 for a warning, and 3 for a fatal error. Below is a simple example of how you can intercept SQL*Loader error codes in a shell script:
$ sqlldr PARFILE=test.par
retcode=$?
if [[ $retcode -ne 2 ]]
then
  mv ${ImpDir}/${Fil} ${InvalidLoadDir}/.${Dstamp}.${Fil}
  writeLog $func "Load Error" "load error:${retcode} on file ${Fil}"
else
  sqlplus / <<EOF
  /* Here you can place any SQL statements to process the data
     that was loaded successfully */
EOF
fi
Loading XML data into an Oracle XML database
The SQL*Loader utility supports the XMLTYPE data type for columns. If a table contains such a column, SQL*Loader can therefore be used to load XML data into it. SQL*Loader treats XML columns as CLOB (Character Large Object) columns.
In addition, Oracle allows you to load XML data either from the primary data file or from an external LOB (Large Object) file, using both fixed-length and delimited fields, and it can also read the entire contents of a file into a single LOB field.
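A minimal sketch of loading whole XML files into an XMLTYPE column through a LOBFILE; the xml_docs table, its columns, and the file names are hypothetical:

LOAD DATA
INFILE 'xml_list.dat'
INTO TABLE xml_docs
FIELDS TERMINATED BY ','
(doc_id    INTEGER EXTERNAL,
 xml_fname FILLER CHAR(80),
 xml_doc   LOBFILE(xml_fname) TERMINATED BY EOF)

Here each line of xml_list.dat contains a document ID and the name of an XML file, and the entire contents of that file are loaded into the xml_doc column.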
Tags: Oracle, Oracle Database, Oracle SQL
Source: https://www.sqlsplus.com/sqlloader-upload-to-oracle/