INSERT ALL and MERGE — when to use each tool?
NuGet Packages
# Core Oracle driver (required for both Dapper and ADO.NET)
dotnet add package Oracle.ManagedDataAccess.Core

# For Dapper
dotnet add package Dapper

# For ADO.NET — same Oracle driver, no extra package needed
Connection String
{
"Database": {
"ConnectionString": "User Id=myuser;Password=mypass;Data Source=myserver:1521/MYDB;"
}
}
Shared Connection Helper
using Oracle.ManagedDataAccess.Client;
using Microsoft.Extensions.Options;

// .NET 8: primary constructor — no boilerplate field assignment
public sealed class DbConnectionFactory(IOptions<DatabaseOptions> opts)
{
    private readonly string _connectionString = opts.Value.ConnectionString;

    // Returns OracleConnection directly — no IDbConnection wrapper needed.
    // Dapper accepts OracleConnection (it implements IDbConnection internally).
    public OracleConnection Create() => new(_connectionString);
}

// Strongly-typed options class (registered via appsettings.json)
public sealed record DatabaseOptions
{
    public required string ConnectionString { get; init; }
}

// Register in Program.cs (.NET 8 minimal hosting)
builder.Services.Configure<DatabaseOptions>(
    builder.Configuration.GetSection("Database"));
builder.Services.AddSingleton<DbConnectionFactory>();
Use Oracle.ManagedDataAccess.Core for .NET Core/5+. The older Oracle.ManagedDataAccess package is for .NET Framework only.

Sample Table & C# Model Used Throughout
CREATE TABLE EMPLOYEES (
    ID        NUMBER(10) PRIMARY KEY,
    NAME      VARCHAR2(100) NOT NULL,
    DEPT      VARCHAR2(50),
    SALARY    NUMBER(10,2),
    HIRE_DATE DATE DEFAULT SYSDATE
);

-- Audit table referenced in transaction examples
CREATE TABLE AUDIT_LOG (
    ID        NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    EVENT     VARCHAR2(100),
    ROW_COUNT NUMBER,
    LOGGED_AT DATE DEFAULT SYSDATE
);

-- Error log table used in FORALL SAVE EXCEPTIONS examples
CREATE TABLE IMPORT_ERRORS (
    ROW_INDEX  NUMBER,
    ERROR_CODE NUMBER,
    LOGGED_AT  DATE DEFAULT SYSDATE
);

-- Global Temporary Table for bulk update/upsert via temp table pattern
CREATE GLOBAL TEMPORARY TABLE EMPLOYEE_UPDATES (
    ID     NUMBER(10),
    NAME   VARCHAR2(100),
    SALARY NUMBER(10,2)
) ON COMMIT DELETE ROWS; -- auto-cleared after each transaction commit
The C# Model Used in Every Example
// .NET 8 record — immutable, value-based equality, concise syntax.
// Property names match Oracle parameter names (:Id, :Name, :Dept, :Salary).
// Used as-is with Dapper, decomposed into arrays for ADO.NET Array Binding.
public record Employee(
    int Id,
    string Name,
    string? Dept,
    decimal Salary
);

// Sample data — used throughout all examples
List<Employee> employees = Enumerable.Range(1, 10_000)
    .Select(i => new Employee(
        Id: i,
        Name: $"Employee {i}",
        Dept: i % 2 == 0 ? "Engineering" : "Sales",
        Salary: 50_000 + (i * 10)))
    .ToList();
Dapper's ExecuteAsync with a list of objects is the easiest approach: Dapper iterates the list and executes the parameterized statement once per item, handling all the parameter mapping for you.
using Dapper;
using Oracle.ManagedDataAccess.Client;

// .NET 8 primary constructor — injects the connection string via DI factory
public class EmployeeRepository(DbConnectionFactory db)
{
    // Oracle named params use : prefix — Dapper matches them to property names
    private const string InsertSql = @"
        INSERT INTO EMPLOYEES (ID, NAME, DEPT, SALARY)
        VALUES (:Id, :Name, :Dept, :Salary)";

    public async Task BulkInsertAsync(IEnumerable<Employee> employees)
    {
        await using var conn = db.Create();
        await conn.OpenAsync();

        // Dapper iterates the IEnumerable and maps each Employee's
        // properties to :Id, :Name, :Dept, :Salary automatically.
        // No manual projection. No loop. One call.
        int rowsAffected = await conn.ExecuteAsync(InsertSql, employees);
        Console.WriteLine($"Inserted {rowsAffected} rows");
    }
}

// ── Usage ───────────────────────────────────────────────────────────
// Employee record and sample data are defined in the Setup module.
// Register the repo with DI, then resolve and call:
await repo.BulkInsertAsync(employees);
ExecuteAsync with a list still sends individual statements — it just handles parameter mapping for you. For truly massive datasets (500k+ rows), prefer ADO.NET Array Binding or OracleBulkCopy.

INSERT ALL — true single-statement multi-row insert
Oracle's INSERT ALL lets you push multiple rows in one SQL statement. Useful for small-to-medium batches (up to ~1000 rows at once).
using System.Text; // for StringBuilder
using Dapper;

public async Task InsertAllAsync(List<Employee> employees)
{
    // Oracle INSERT ALL syntax:
    //   INSERT ALL
    //     INTO t (cols) VALUES (:p1, :p2)
    //     INTO t (cols) VALUES (:p3, :p4)
    //   SELECT 1 FROM DUAL   ← required terminator, always at the END
    var sb = new StringBuilder("INSERT ALL\n");
    var parameters = new DynamicParameters();

    for (int i = 0; i < employees.Count; i++)
    {
        sb.AppendLine($"  INTO EMPLOYEES (ID, NAME, DEPT, SALARY)");
        sb.AppendLine($"  VALUES (:id{i}, :name{i}, :dept{i}, :salary{i})");
        parameters.Add($"id{i}", employees[i].Id);
        parameters.Add($"name{i}", employees[i].Name);
        parameters.Add($"dept{i}", employees[i].Dept ?? "");
        parameters.Add($"salary{i}", employees[i].Salary);
    }
    sb.Append("SELECT 1 FROM DUAL"); // required terminator — always last

    await using var conn = db.Create();
    await conn.OpenAsync();
    await conn.ExecuteAsync(sb.ToString(), parameters);
}
Array Binding is Oracle's native mechanism to send arrays of values in one command. This is dramatically faster than looping.
using System.Data; // ParameterDirection
using Oracle.ManagedDataAccess.Client;

public async Task BulkInsertArrayBindingAsync(List<Employee> employees)
{
    // Step 1: Build column arrays
    int count = employees.Count;
    int[]     ids      = employees.Select(e => e.Id).ToArray();
    string[]  names    = employees.Select(e => e.Name).ToArray();
    string[]  depts    = employees.Select(e => e.Dept ?? "").ToArray(); // null → "" → Oracle stores as NULL
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();

    await using var cmd = conn.CreateCommand();
    cmd.CommandText = @"
        INSERT INTO EMPLOYEES (ID, NAME, DEPT, SALARY)
        VALUES (:Id, :Name, :Dept, :Salary)";

    // Step 2: Set ArrayBindCount — THIS IS THE KEY
    cmd.ArrayBindCount = count;

    // Add parameters with array values
    cmd.Parameters.Add("Id",     OracleDbType.Int32,    ids,      ParameterDirection.Input);
    cmd.Parameters.Add("Name",   OracleDbType.Varchar2, names,    ParameterDirection.Input);
    cmd.Parameters.Add("Dept",   OracleDbType.Varchar2, depts,    ParameterDirection.Input);
    cmd.Parameters.Add("Salary", OracleDbType.Decimal,  salaries, ParameterDirection.Input);

    // Step 3: Execute ONCE for all rows
    await cmd.ExecuteNonQueryAsync();
    Console.WriteLine($"Inserted {count} rows in ONE round-trip");
}
OracleBulkCopy is the fastest method. It uses Oracle's direct-path load, bypassing most conventional SQL statement processing and writing data straight into the table — similar to SQL Server's SqlBulkCopy.
using Oracle.ManagedDataAccess.Client;
using System.Data;

public async Task BulkCopyAsync(List<Employee> employees)
{
    // Step 1: Convert list to DataTable (OracleBulkCopy requires this)
    var table = new DataTable();
    table.Columns.Add("ID",     typeof(int));
    table.Columns.Add("NAME",   typeof(string));
    table.Columns.Add("DEPT",   typeof(string));
    table.Columns.Add("SALARY", typeof(decimal));

    foreach (var e in employees)
        table.Rows.Add(e.Id, e.Name, e.Dept, e.Salary);

    await using var conn = db.Create();
    await conn.OpenAsync();

    // Step 2: Configure OracleBulkCopy
    // Note: OracleBulkCopy implements IDisposable (not IAsyncDisposable),
    // so 'using var' (sync dispose) is correct here — not 'await using'.
    using var bulkCopy = new OracleBulkCopy(conn)
    {
        DestinationTableName = "EMPLOYEES",
        BatchSize = 5000,       // rows per batch
        BulkCopyTimeout = 120   // seconds
    };

    // Step 3: Map DataTable columns → Oracle columns (optional if names match)
    bulkCopy.ColumnMappings.Add("ID",     "ID");
    bulkCopy.ColumnMappings.Add("NAME",   "NAME");
    bulkCopy.ColumnMappings.Add("DEPT",   "DEPT");
    bulkCopy.ColumnMappings.Add("SALARY", "SALARY");

    // Step 4: Stream data into Oracle
    await bulkCopy.WriteToServerAsync(table);
    Console.WriteLine($"BulkCopy done: {employees.Count} rows");
}
Tune BatchSize to avoid memory pressure (5,000–50,000 is typical).

Pass a list of objects — Dapper updates each row using matching parameters. Simple and readable.
public async Task BulkUpdateAsync(IEnumerable<Employee> employees)
{
    const string sql = @"
        UPDATE EMPLOYEES
        SET NAME = :Name, DEPT = :Dept, SALARY = :Salary
        WHERE ID = :Id";

    await using var conn = db.Create();
    await conn.OpenAsync();

    // Dapper issues one UPDATE per employee — simple but not as fast as
    // Array Binding. Use for small-to-medium lists (< ~5,000 rows).
    await conn.ExecuteAsync(sql, employees);
}
Conditional Bulk Update — single WHERE clause
When the same condition applies to all rows, a single parameterised UPDATE is the fastest and simplest approach — no looping, no arrays.
// Give all Engineering employees under 100,000 salary a 10% raise.
// One SQL statement, one round-trip — no loop needed.
public async Task RaiseSalaryByDeptAsync(string dept, decimal maxSalary)
{
    const string sql = @"
        UPDATE EMPLOYEES
        SET SALARY = SALARY * 1.10
        WHERE DEPT = :Dept AND SALARY < :MaxSalary";

    await using var conn = db.Create();
    await conn.OpenAsync();
    int rows = await conn.ExecuteAsync(sql,
        new { Dept = dept, MaxSalary = maxSalary });
    Console.WriteLine($"Raised salary for {rows} employees in {dept}");
}
Array Binding for updates works the same way as inserts — set ArrayBindCount and pass arrays.
public async Task BulkUpdateArrayBindAsync(List<Employee> employees)
{
    int[]     ids      = employees.Select(e => e.Id).ToArray();
    string[]  names    = employees.Select(e => e.Name).ToArray();
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.CommandText = @"
        UPDATE EMPLOYEES
        SET NAME = :Name, SALARY = :Salary
        WHERE ID = :Id";
    cmd.ArrayBindCount = employees.Count;

    cmd.Parameters.Add("Name",   OracleDbType.Varchar2, names,    ParameterDirection.Input);
    cmd.Parameters.Add("Salary", OracleDbType.Decimal,  salaries, ParameterDirection.Input);
    cmd.Parameters.Add("Id",     OracleDbType.Int32,    ids,      ParameterDirection.Input);

    await cmd.ExecuteNonQueryAsync();
}
For very large updates, load data into a Global Temporary Table first, then join-update. This is Oracle's most scalable bulk update pattern.
-- Create once (in schema migrations)
CREATE GLOBAL TEMPORARY TABLE EMPLOYEE_UPDATES (
    ID     NUMBER(10),
    NAME   VARCHAR2(100),
    SALARY NUMBER(10,2)
) ON COMMIT DELETE ROWS; -- auto-clears after each transaction
// ⚠️ Important: OracleBulkCopy cannot join an existing transaction.
// It manages its own internal transaction. This is why the GTT pattern works:
//   1. BulkCopy loads data into the GTT (its own internal tx commits the GTT data)
//   2. The UPDATE runs on the MAIN connection's transaction (tx)
//   3. When tx.CommitAsync() fires, the UPDATE commits atomically.
// The GTT data is visible to the same session regardless of BulkCopy's tx.
public async Task BulkUpdateViaTempTableAsync(List<Employee> employees)
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        // 1. Load staging data into GTT via BulkCopy (UseInternalTransaction
        //    tells BulkCopy to commit its own mini-transaction so data is visible)
        //    ToDataTable is the generic helper defined in Module 9 (List-of-Objects section).
        var dt = DataTableConverter.ToDataTable(employees);
        using var bulk = new OracleBulkCopy(conn, OracleBulkCopyOptions.UseInternalTransaction)
        {
            DestinationTableName = "EMPLOYEE_UPDATES"
        };
        await bulk.WriteToServerAsync(dt);

        // 2. Join-update: UPDATE main table FROM GTT data.
        //    This cmd runs on tx, so it rolls back if anything fails.
        await using var cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText = @"
            UPDATE EMPLOYEES e
            SET (e.NAME, e.SALARY) = (
                SELECT u.NAME, u.SALARY
                FROM EMPLOYEE_UPDATES u
                WHERE u.ID = e.ID
            )
            WHERE EXISTS (
                SELECT 1 FROM EMPLOYEE_UPDATES u WHERE u.ID = e.ID
            )";
        await cmd.ExecuteNonQueryAsync();

        await tx.CommitAsync(); // GTT auto-clears on commit (ON COMMIT DELETE ROWS)
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
Strategy 1 — Delete by IDs using IN clause (Dapper)
public async Task DeleteByIdsAsync(IEnumerable<int> ids)
{
    // Oracle + Dapper: IN clause needs parentheses around the parameter.
    // Dapper expands the array into individual bind variables automatically.
    const string sql = "DELETE FROM EMPLOYEES WHERE ID IN (:Ids)";

    await using var conn = db.Create();
    await conn.OpenAsync();
    await conn.ExecuteAsync(sql, new { Ids = ids.ToArray() });
}
The IN clause has a 1,000-item limit. For larger sets, chunk the IDs (next strategy) or stage them in a temp table and delete with a join.

Strategy 2 — Chunked delete (Dapper)
public async Task DeleteChunkedAsync(List<int> allIds, int chunkSize = 900)
{
    // Open once and reuse across all chunks — avoid reconnect overhead
    await using var conn = db.Create();
    await conn.OpenAsync();

    // Split into 900-item chunks (safely under Oracle's 1,000 IN-list limit)
    var chunks = allIds
        .Select((id, i) => (id, i))
        .GroupBy(x => x.i / chunkSize)
        .Select(g => g.Select(x => x.id).ToArray());

    foreach (var chunk in chunks)
        await conn.ExecuteAsync(
            "DELETE FROM EMPLOYEES WHERE ID IN (:Ids)",
            new { Ids = chunk });
}
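On .NET 6+ the same splitting can be done with Enumerable.Chunk, which replaces the index/GroupBy projection with a single call. A minimal sketch of just the chunking logic (the DELETE loop stays as above):

```csharp
using System;
using System.Linq;

// Enumerable.Chunk (.NET 6+) splits a sequence into arrays of at most
// `chunkSize` elements — equivalent to the GroupBy projection above.
int[] allIds = Enumerable.Range(1, 2_000).ToArray();
int chunkSize = 900; // safely under Oracle's 1,000 IN-list limit

int[][] chunks = allIds.Chunk(chunkSize).ToArray();

Console.WriteLine(chunks.Length);     // 3 chunks: 900 + 900 + 200
Console.WriteLine(chunks[^1].Length); // 200 — last chunk holds the remainder
```

Chunk preserves order, so deletes happen in the same ID order as the input list.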
Strategy 3 — Delete with condition (fastest)
When all rows share the same condition, a single parameterised DELETE is optimal — no loops, no arrays, one round-trip.
// Delete all temporary employees hired more than 5 years ago.
// One SQL statement handles all qualifying rows atomically.
public async Task<int> DeleteStaleTempsAsync()
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    return await conn.ExecuteAsync(@"
        DELETE FROM EMPLOYEES
        WHERE DEPT = :Dept AND HIRE_DATE < :CutoffDate",
        new { Dept = "Temp", CutoffDate = DateTime.UtcNow.AddYears(-5) });
}
Strategy 4 — Array Binding delete (ADO.NET)
public async Task BulkDeleteArrayBindAsync(int[] ids)
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.CommandText = "DELETE FROM EMPLOYEES WHERE ID = :Id";
    cmd.ArrayBindCount = ids.Length;
    cmd.Parameters.Add("Id", OracleDbType.Int32, ids, ParameterDirection.Input);

    int deleted = await cmd.ExecuteNonQueryAsync();
    Console.WriteLine($"Deleted: {deleted} rows");
}
Oracle's MERGE statement is the built-in way to do an upsert in one atomic operation.

The Problem Upsert Solves
Imagine you're syncing employee records from an HR system into your database every night. Some employees already exist (need updating), some are brand new (need inserting). Without upsert, you'd have to check each row for existence first and then branch — a SELECT followed by either an UPDATE or an INSERT, doubling the round-trips and leaving a race-condition window between the check and the write.
WHEN MATCHED = row found → run UPDATE. WHEN NOT MATCHED = row missing → run INSERT. Oracle even lets you add a DELETE WHERE sub-clause to WHEN MATCHED.

Conceptual flow — what Oracle does internally per row
FOR EACH row IN source_data:
    IF EXISTS (SELECT 1 FROM target WHERE target.ID = row.ID):
        UPDATE target SET ... WHERE target.ID = row.ID   -- WHEN MATCHED
    ELSE:
        INSERT INTO target ...                           -- WHEN NOT MATCHED

-- Oracle does this for ALL rows in ONE SQL statement.
-- You never write the IF/ELSE yourself — MERGE does it for you.
MERGE with Dapper — small-to-medium datasets
This pattern uses SELECT :param FROM DUAL as the source, which means Dapper executes one MERGE per row in the list. It is simple and correct, but not bulk. Use it for up to ~5,000 rows. For larger datasets, use the Temp Table MERGE below.

public async Task BulkUpsertAsync(IEnumerable<Employee> employees)
{
    // SELECT :params FROM DUAL as source — one MERGE execution per employee.
    // Dapper iterates the collection and binds each Employee's properties.
    const string sql = @"
        MERGE INTO EMPLOYEES target
        USING (
            SELECT :Id     AS ID,
                   :Name   AS NAME,
                   :Dept   AS DEPT,
                   :Salary AS SALARY
            FROM DUAL
        ) source
        ON (target.ID = source.ID)
        WHEN MATCHED THEN UPDATE SET
            target.NAME   = source.NAME,
            target.DEPT   = source.DEPT,
            target.SALARY = source.SALARY
        WHEN NOT MATCHED THEN INSERT (ID, NAME, DEPT, SALARY)
            VALUES (source.ID, source.NAME, source.DEPT, source.SALARY)";

    await using var conn = db.Create();
    await conn.OpenAsync();
    await conn.ExecuteAsync(sql, employees);
}
MERGE via Temp Table (Best for large datasets)
public async Task BulkUpsertViaTempAsync(List<Employee> employees)
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        // 1. BulkCopy into the GTT.
        //    UseInternalTransaction: BulkCopy commits its own mini-transaction
        //    so the GTT rows are visible to the same session for the MERGE below.
        using var bulk = new OracleBulkCopy(conn, OracleBulkCopyOptions.UseInternalTransaction)
        {
            DestinationTableName = "EMPLOYEE_UPDATES"
        };
        // DataTableConverter.ToDataTable is the generic helper from Module 9.
        await bulk.WriteToServerAsync(DataTableConverter.ToDataTable(employees));

        // 2. MERGE from GTT → EMPLOYEES (runs on the main tx)
        await using var cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText = @"
            MERGE INTO EMPLOYEES tgt
            USING EMPLOYEE_UPDATES src
            ON (tgt.ID = src.ID)
            WHEN MATCHED THEN UPDATE SET
                tgt.NAME   = src.NAME,
                tgt.SALARY = src.SALARY
            WHEN NOT MATCHED THEN INSERT (ID, NAME, SALARY)
                VALUES (src.ID, src.NAME, src.SALARY)";
        await cmd.ExecuteNonQueryAsync();

        await tx.CommitAsync(); // GTT auto-cleared (ON COMMIT DELETE ROWS)
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
Handling NULLs in Array Binding
// ── Approach A: use empty string for nullable strings ─────────────
// Oracle treats '' as NULL for VARCHAR2, so this is safe.
string[] depts = employees
    .Select(e => e.Dept ?? "")  // null → "" → stored as NULL in Oracle VARCHAR2
    .ToArray();

// ── Approach B: use OracleParameterStatus for explicit NULL control ─
// Required when you need to distinguish between empty string and NULL,
// or when working with nullable numeric/date columns.
decimal?[] nullableSalaries = employees
    .Select(e => (decimal?)e.Salary)  // keep as nullable decimal
    .ToArray();

var salaryValues = nullableSalaries
    .Select(s => s.HasValue ? s.Value : (decimal)0)  // placeholder value for nulls
    .ToArray();

var salaryStatus = nullableSalaries
    .Select(s => s.HasValue
        ? OracleParameterStatus.Success      // use the value
        : OracleParameterStatus.NullInsert)  // ignore value, insert NULL
    .ToArray();

var param = new OracleParameter
{
    ParameterName   = "Salary",
    OracleDbType    = OracleDbType.Decimal,
    Direction       = ParameterDirection.Input,
    Value           = salaryValues,
    // For array binding the per-element statuses go in ArrayBindStatus
    // (the scalar Status property covers non-array parameters).
    ArrayBindStatus = salaryStatus  // overrides Value wherever NullInsert
};
cmd.Parameters.Add(param);
Calling Stored Procedures with Array Binding
using System.Data;
using Oracle.ManagedDataAccess.Client;

public async Task CallBulkProcAsync(List<Employee> employees)
{
    int[]    ids   = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.CommandText = "PKG_EMPLOYEE.BULK_INSERT"; // Package.Procedure
    cmd.CommandType = CommandType.StoredProcedure;

    // Set ArrayBindCount BEFORE adding parameters.
    // Note: with ArrayBindCount the procedure receives SCALAR parameters and
    // is invoked once per array element — batched into one round-trip.
    cmd.ArrayBindCount = employees.Count;

    // Input arrays — one element consumed per invocation
    cmd.Parameters.Add("p_ids",   OracleDbType.Int32,    ids,   ParameterDirection.Input);
    cmd.Parameters.Add("p_names", OracleDbType.Varchar2, names, ParameterDirection.Input);

    // Scalar OUTPUT parameter — use named-property syntax, not positional
    // constructor. With array binding, Value holds one result per invocation.
    cmd.Parameters.Add(new OracleParameter
    {
        ParameterName = "p_result",
        OracleDbType  = OracleDbType.Varchar2,
        Size          = 200,
        Direction     = ParameterDirection.Output
    });

    await cmd.ExecuteNonQueryAsync();
    Console.WriteLine(cmd.Parameters["p_result"].Value);
}
Reading from a CSV and bulk loading
using System.Data;
using Oracle.ManagedDataAccess.Client;

public async Task LoadFromCsvAsync(string csvPath)
{
    // Parse CSV into DataTable
    var table = new DataTable();
    table.Columns.Add("ID",     typeof(int));
    table.Columns.Add("NAME",   typeof(string));
    table.Columns.Add("DEPT",   typeof(string));
    table.Columns.Add("SALARY", typeof(decimal));

    foreach (var line in File.ReadLines(csvPath).Skip(1)) // skip header
    {
        var cols = line.Split(',');
        table.Rows.Add(int.Parse(cols[0]), cols[1], cols[2], decimal.Parse(cols[3]));
    }

    await using var conn = db.Create();
    await conn.OpenAsync();

    using var bulk = new OracleBulkCopy(conn, OracleBulkCopyOptions.UseInternalTransaction) // auto-commit per batch
    {
        DestinationTableName = "EMPLOYEES",
        BatchSize = 10_000,
        BulkCopyTimeout = 300
    };

    // Wire up progress notifications every 5000 rows
    bulk.NotifyAfter = 5000;
    bulk.OracleRowsCopied += (s, e) =>
        Console.WriteLine($"  → {e.RowsCopied:N0} rows copied so far...");

    await bulk.WriteToServerAsync(table);
    Console.WriteLine($"Done! {table.Rows.Count:N0} rows loaded.");
}
Streaming with IDataReader (memory-efficient for huge files)
// IDataReader lets OracleBulkCopy stream rows without loading all into memory.
// Use EnumerableDataReader<T> defined in the List-of-Objects module.
public async Task StreamBulkCopyAsync(IAsyncEnumerable<Employee> source)
{
    await using var conn = db.Create();
    await conn.OpenAsync();

    // OracleBulkCopy implements IDisposable only — using (not await using) is correct
    using var bulk = new OracleBulkCopy(conn)
    {
        DestinationTableName = "EMPLOYEES",
        BatchSize = 5000
    };

    // Materialize IAsyncEnumerable into List (no System.Linq.Async package needed)
    var list = new List<Employee>();
    await foreach (var emp in source)
        list.Add(emp);

    var reader = new EnumerableDataReader<Employee>(list,
        ("ID",     e => (object)e.Id),
        ("NAME",   e => e.Name),
        ("DEPT",   e => e.Dept),
        ("SALARY", e => e.Salary)
    );

    await bulk.WriteToServerAsync(reader);
}
• Dapper → accepts IEnumerable<T> directly (it reads properties by name).
• Array Binding → needs a T[] per column (project your list).
• OracleBulkCopy → needs a DataTable or IDataReader.
• FORALL stored proc → needs an Oracle collection type (VARRAY / Nested Table) passed via UDT.
Dapper is the friendliest API here. Pass your List<Employee> straight in — Dapper uses reflection to match property names to Oracle parameters.
// Same canonical model from Setup — properties map to :Id, :Name, :Dept, :Salary
public record Employee(
    int Id,
    string Name,
    string? Dept,   // nullable — Dapper sends NULL if omitted
    decimal Salary
);

public async Task InsertListAsync(List<Employee> employees)
{
    const string sql = @"
        INSERT INTO EMPLOYEES (ID, NAME, DEPT, SALARY)
        VALUES (:Id, :Name, :Dept, :Salary)";

    await using var conn = db.Create();
    await conn.OpenAsync();

    // Dapper iterates List<Employee> automatically.
    // It maps Employee.Id → :Id, Employee.Name → :Name, etc.
    // No manual projection needed.
    await conn.ExecuteAsync(sql, employees);
}

// Also works with anonymous objects
var rows = employees.Select(e => new
{
    e.Id,
    e.Name,
    Dept = e.Dept ?? "Unknown",   // transform inline
    Salary = e.Salary * 1.05m     // apply business rule
});
await conn.ExecuteAsync(sql, rows);
Dapper's parameter matching is case-insensitive — :id, :ID, and :Id all match Employee.Id. You don't need to rename your properties.

Nested / complex objects — flatten before passing
public record OrderLine(int OrderId, Product Product, int Qty);
public record Product(int Id, string Name, decimal Price);

public async Task InsertOrderLinesAsync(List<OrderLine> orderLines)
{
    // Flatten nested objects to a shape that matches SQL parameters.
    // Dapper reads property names from the anonymous object.
    var rows = orderLines.Select(ol => new
    {
        OrderId   = ol.OrderId,
        ProductId = ol.Product.Id,
        UnitPrice = ol.Product.Price,
        Quantity  = ol.Qty,
        LineTotal = ol.Product.Price * ol.Qty
    });

    await using var conn = db.Create();
    await conn.OpenAsync();
    await conn.ExecuteAsync(@"
        INSERT INTO ORDER_LINES (ORDER_ID, PRODUCT_ID, UNIT_PRICE, QTY, TOTAL)
        VALUES (:OrderId, :ProductId, :UnitPrice, :Quantity, :LineTotal)",
        rows);
}
Array Binding requires you to decompose your list of objects into one typed array per column. The pattern is always the same: .Select(e => e.Property).ToArray().
public async Task BulkInsertFromListAsync(List<Employee> employees)
{
    // ── Decompose List<Employee> into per-column arrays ──────────────
    int[]     ids      = employees.Select(e => e.Id).ToArray();
    string[]  names    = employees.Select(e => e.Name).ToArray();
    string[]  depts    = employees.Select(e => e.Dept ?? "").ToArray();
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();
    // ─────────────────────────────────────────────────────────────────

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.CommandText =
        "INSERT INTO EMPLOYEES (ID,NAME,DEPT,SALARY) VALUES (:Id,:Name,:Dept,:Salary)";
    cmd.ArrayBindCount = employees.Count; // MUST be set first

    cmd.Parameters.Add("Id",     OracleDbType.Int32,    ids,      ParameterDirection.Input);
    cmd.Parameters.Add("Name",   OracleDbType.Varchar2, names,    ParameterDirection.Input);
    cmd.Parameters.Add("Dept",   OracleDbType.Varchar2, depts,    ParameterDirection.Input);
    cmd.Parameters.Add("Salary", OracleDbType.Decimal,  salaries, ParameterDirection.Input);

    await cmd.ExecuteNonQueryAsync(); // ONE call for ALL rows
}

// ── Reusable generic helper to extract a column array ──
private static TCol[] Col<TObj, TCol>(
    IEnumerable<TObj> list, Func<TObj, TCol> selector)
    => list.Select(selector).ToArray();

// Usage
var ids   = Col(employees, e => e.Id);
var names = Col(employees, e => e.Name);
OracleBulkCopy needs a DataTable. Here's a clean, reusable generic converter so you never write this boilerplate again.
using System.ComponentModel.DataAnnotations.Schema;
using System.Reflection;

/// <summary>
/// Converts any IEnumerable<T> to a DataTable using reflection.
/// Column names: uses [Column("ORACLE_NAME")] attribute if present,
/// otherwise falls back to the property name uppercased.
/// Nullable types are unwrapped so DataTable accepts null correctly.
/// </summary>
public static class DataTableConverter
{
    public static DataTable ToDataTable<T>(IEnumerable<T> items)
    {
        var table = new DataTable(typeof(T).Name);
        PropertyInfo[] props = typeof(T)
            .GetProperties(BindingFlags.Public | BindingFlags.Instance);

        // Build columns — respect [Column] attribute for Oracle column mapping
        foreach (var prop in props)
        {
            var colAttr = prop.GetCustomAttribute<ColumnAttribute>();
            var colName = colAttr?.Name ?? prop.Name.ToUpperInvariant();
            var colType = Nullable.GetUnderlyingType(prop.PropertyType)
                          ?? prop.PropertyType;
            table.Columns.Add(colName, colType);
        }

        // Fill rows — DBNull.Value for null properties so Oracle stores NULL
        foreach (var item in items)
        {
            var values = props
                .Select(p => p.GetValue(item) ?? DBNull.Value)
                .ToArray();
            table.Rows.Add(values);
        }
        return table;
    }
}

// ── Usage with default names (property name uppercased) ─────────
DataTable dt = DataTableConverter.ToDataTable(employees);

using var bulk = new OracleBulkCopy(conn)
{
    DestinationTableName = "EMPLOYEES",
    BatchSize = 5000
};
await bulk.WriteToServerAsync(dt);

// ── Usage with [Column] attribute ────────────────────────────────
// public record Employee(
//     [Column("EMP_ID")]   int Id,
//     [Column("EMP_NAME")] string Name,
//     decimal Salary   // → falls back to "SALARY"
// );
Manual DataTable (explicit control)
// When you need to rename columns, transform values, or map to a different schema
var dt = new DataTable();
dt.Columns.Add("EMP_ID",    typeof(int));     // renamed from "ID"
dt.Columns.Add("EMP_NAME",  typeof(string));  // renamed from "NAME"
dt.Columns.Add("DEPT_CODE", typeof(string));  // uppercased transform
dt.Columns.Add("SALARY",    typeof(decimal));

foreach (var e in employees)
    dt.Rows.Add(
        e.Id,
        e.Name.Trim(),                     // clean whitespace
        e.Dept?.ToUpperInvariant() ?? "",  // transform nullable
        e.Salary
    );

// Map DataTable column names → Oracle table column names
// Required when they differ (EMP_ID → ID in EMPLOYEES table)
using var bulk = new OracleBulkCopy(conn) { DestinationTableName = "EMPLOYEES" };
bulk.ColumnMappings.Add("EMP_ID",    "ID");
bulk.ColumnMappings.Add("EMP_NAME",  "NAME");
bulk.ColumnMappings.Add("DEPT_CODE", "DEPT");
bulk.ColumnMappings.Add("SALARY",    "SALARY");

await bulk.WriteToServerAsync(dt);
For truly massive lists, loading everything into a DataTable first doubles memory usage. An IDataReader wrapper streams rows lazily — OracleBulkCopy reads one at a time without materializing the whole set.
/// Wraps IEnumerable<T> as IDataReader so OracleBulkCopy
/// can stream rows without a full DataTable in memory.
public class EnumerableDataReader<T> : IDataReader
{
    private readonly IEnumerator<T> _enumerator;
    private readonly (string Name, Func<T, object> Get)[] _columns;

    public EnumerableDataReader(
        IEnumerable<T> source,
        params (string, Func<T, object>)[] columns)
    {
        _enumerator = source.GetEnumerator();
        _columns = columns;
    }

    public int FieldCount => _columns.Length;
    public bool Read() => _enumerator.MoveNext();
    public object GetValue(int i) => _columns[i].Get(_enumerator.Current) ?? DBNull.Value;
    public string GetName(int i) => _columns[i].Name;
    public void Dispose() => _enumerator.Dispose();

    // ... the remaining IDataReader members can simply
    // throw new NotImplementedException() — bulk copy drives the reader
    // through the members implemented above.
}

// ── Usage ─────────────────────────────────────────────────────────
var reader = new EnumerableDataReader<Employee>(employees,
    ("ID",     e => e.Id),
    ("NAME",   e => e.Name),
    ("DEPT",   e => e.Dept),
    ("SALARY", e => e.Salary)
);

using var bulk = new OracleBulkCopy(conn)
{
    DestinationTableName = "EMPLOYEES",
    BatchSize = 10_000
};
await bulk.WriteToServerAsync(reader); // streams row by row, low memory!
The IDataReader approach is the most memory-efficient option. For 1M+ rows it is the only approach here that won't risk an OutOfMemoryException.

Custom Type Handler for Oracle Dates
using System.Data; // IDbDataParameter
using Dapper;
using Oracle.ManagedDataAccess.Types;

public class OracleDateHandler : SqlMapper.TypeHandler<DateTime>
{
    // Called when writing DateTime → Oracle parameter
    public override void SetValue(IDbDataParameter parameter, DateTime value)
        => parameter.Value = new OracleDate(value);

    // Called when reading Oracle DATE → DateTime
    public override DateTime Parse(object value)
        => ((OracleDate)value).Value;
}

// ── Register in Program.cs (.NET 8 minimal hosting) ──────────────
// Call this ONCE before any Dapper queries run
var builder = WebApplication.CreateBuilder(args);
SqlMapper.AddTypeHandler(new OracleDateHandler()); // ← before builder.Build()
var app = builder.Build();
Batch Size Strategy — Chunked Dapper inserts
public async Task ChunkedBulkInsertAsync(
    List<Employee> employees, int batchSize = 1000)
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        for (int i = 0; i < employees.Count; i += batchSize)
        {
            // Materialize to List so Count doesn't re-enumerate the LINQ chain
            var batch = employees.Skip(i).Take(batchSize).ToList();
            await conn.ExecuteAsync(
                "INSERT INTO EMPLOYEES (ID,NAME,DEPT,SALARY) VALUES (:Id,:Name,:Dept,:Salary)",
                batch, transaction: tx);
            Console.WriteLine($"Batch {i / batchSize + 1}: inserted {batch.Count} rows");
        }
        await tx.CommitAsync();
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
FORALL is Oracle PL/SQL's native bulk DML statement. Instead of a regular FOR loop that calls INSERT/UPDATE/DELETE one row at a time, FORALL sends the entire collection to the SQL engine in one batch. You call it from .NET by invoking the stored procedure that contains it.
Part 1 — The PL/SQL Side
First, understand what FORALL looks like in a stored procedure.
Basic FORALL — simple collection
-- Step 1: Define a collection type (array of numbers, array of strings, etc.)
CREATE OR REPLACE TYPE t_id_list     AS TABLE OF NUMBER;
CREATE OR REPLACE TYPE t_name_list   AS TABLE OF VARCHAR2(100);
CREATE OR REPLACE TYPE t_salary_list AS TABLE OF NUMBER(10,2);

-- Step 2: Stored procedure that receives collections and uses FORALL
CREATE OR REPLACE PROCEDURE bulk_insert_employees (
    p_ids      IN t_id_list,
    p_names    IN t_name_list,
    p_salaries IN t_salary_list
) AS
BEGIN
    -- FORALL replaces: FOR i IN 1..p_ids.COUNT LOOP INSERT... END LOOP;
    -- It sends ALL rows to the SQL engine in ONE operation.
    FORALL i IN 1 .. p_ids.COUNT
        INSERT INTO EMPLOYEES (ID, NAME, SALARY)
        VALUES (p_ids(i), p_names(i), p_salaries(i));
    COMMIT;
END;
/
FORALL i IN 1 .. p_ids.COUNT — i is the index variable, 1 .. COUNT is the range (all elements). The DML statement logically runs once per element, but all iterations are sent to the SQL engine as a single batch.
p_ids(i) — Oracle collections use parentheses, not brackets.
FORALL variants
-- 1. Range: process all elements
FORALL i IN 1 .. p_ids.COUNT
    INSERT INTO EMPLOYEES (ID) VALUES (p_ids(i));

-- 2. Range: process a subset (rows 501 to 1000)
FORALL i IN 501 .. 1000
    INSERT INTO EMPLOYEES (ID) VALUES (p_ids(i));

-- 3. INDICES OF: skip deleted/null elements (sparse collections)
FORALL i IN INDICES OF p_ids
    UPDATE EMPLOYEES SET SALARY = p_salaries(i) WHERE ID = p_ids(i);

-- 4. VALUES OF: use another collection as the index list
-- l_index_list is a PLS_INTEGER collection of indices into p_ids
-- (must be declared separately, e.g. TYPE t_idx_list IS TABLE OF PLS_INTEGER)
FORALL i IN VALUES OF l_index_list
    DELETE FROM EMPLOYEES WHERE ID = p_ids(i);

-- 5. SAVE EXCEPTIONS: don't stop on first error, collect all errors
FORALL i IN 1 .. p_ids.COUNT SAVE EXCEPTIONS
    INSERT INTO EMPLOYEES (ID, NAME) VALUES (p_ids(i), p_names(i));

-- How many rows each DML affected (populated after FORALL)
FOR i IN 1 .. p_ids.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('Row ' || i || ': ' ||
        SQL%BULK_ROWCOUNT(i) || ' rows affected');
END LOOP;
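Variant 4 above references l_index_list without showing how it is declared or filled. As a hedged sketch — assuming the p_ids/p_salaries collections from the bulk_insert_employees example, and an illustrative "salary above 50,000" filter — the declare-and-populate step inside the procedure body could look like this:

```sql
DECLARE
    -- Index list: a collection of PLS_INTEGER positions into p_ids
    TYPE t_idx_list IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
    l_index_list t_idx_list;
    j PLS_INTEGER := 0;
BEGIN
    -- Collect the indices of elements we want FORALL to visit
    FOR i IN 1 .. p_ids.COUNT LOOP
        IF p_salaries(i) > 50000 THEN
            j := j + 1;
            l_index_list(j) := i;   -- store the POSITION, not the value
        END IF;
    END LOOP;

    -- FORALL visits only the positions collected above
    FORALL i IN VALUES OF l_index_list
        UPDATE EMPLOYEES SET SALARY = p_salaries(i) WHERE ID = p_ids(i);
END;
```

The index collection must hold PLS_INTEGER (or BINARY_INTEGER) values; the elements are positions into the data collections, which is why the loop stores i rather than p_ids(i).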
BULK COLLECT — the SELECT counterpart
DECLARE
    TYPE t_emp_ids   IS TABLE OF EMPLOYEES.ID%TYPE;
    TYPE t_emp_names IS TABLE OF EMPLOYEES.NAME%TYPE;
    l_ids   t_emp_ids;
    l_names t_emp_names;
    -- Cursor for the chunked LIMIT example below
    CURSOR l_cur IS
        SELECT ID, NAME FROM EMPLOYEES WHERE DEPT = 'Engineering';
BEGIN
    -- Pattern 1: BULK COLLECT without LIMIT (loads ALL rows into memory)
    -- Only safe for small-to-medium result sets (< ~100k rows)
    SELECT ID, NAME
    BULK COLLECT INTO l_ids, l_names
    FROM EMPLOYEES
    WHERE DEPT = 'Engineering';

    -- Now process with FORALL (e.g. copy to archive table)
    FORALL i IN 1 .. l_ids.COUNT
        INSERT INTO EMP_ARCHIVE (ID, NAME, ARCHIVED_AT)
        VALUES (l_ids(i), l_names(i), SYSDATE);

    -- Pattern 2: BULK COLLECT with LIMIT = chunked fetch (memory-safe)
    -- Use for large result sets — fetch + process 1,000 rows at a time
    OPEN l_cur;
    LOOP
        FETCH l_cur BULK COLLECT INTO l_ids, l_names LIMIT 1000;
        EXIT WHEN l_ids.COUNT = 0;

        FORALL i IN 1 .. l_ids.COUNT
            INSERT INTO EMP_ARCHIVE (ID, NAME, ARCHIVED_AT)
            VALUES (l_ids(i), l_names(i), SYSDATE);
    END LOOP;
    CLOSE l_cur;
    COMMIT;
END;
/
Part 2 — Calling FORALL Procedures from .NET
To pass a List<T> into an Oracle stored procedure that uses FORALL, you need to bind your .NET arrays as Oracle collection parameters. Here are two strategies.
Strategy A — Separate array parameters (simplest)
Pass each column as a separate PL/SQL associative-array parameter (OracleCollectionType.PLSQLAssociativeArray). One caveat: ODP.NET binds associative arrays, not the schema-level nested table types shown above — so for this strategy, declare the procedure's parameter types in a package spec, e.g. TYPE t_id_list IS TABLE OF NUMBER INDEX BY PLS_INTEGER (and likewise for names and salaries).
public async Task CallBulkInsertProcAsync(List<Employee> employees)
{
    // Decompose List<Employee> into typed arrays
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.CommandText = "bulk_insert_employees";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.BindByName = true;

    // Each parameter is a PL/SQL associative array — the whole array travels
    // to Oracle in ONE call. (Do NOT use ArrayBindCount here: that is a
    // different feature that would invoke the procedure once per element
    // with scalar arguments.)
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
    {
        CollectionType = OracleCollectionType.PLSQLAssociativeArray,
        Value = ids, Size = ids.Length
    });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
    {
        CollectionType = OracleCollectionType.PLSQLAssociativeArray,
        Value = names, Size = names.Length
    });
    cmd.Parameters.Add(new OracleParameter("p_salaries", OracleDbType.Decimal)
    {
        CollectionType = OracleCollectionType.PLSQLAssociativeArray,
        Value = salaries, Size = salaries.Length
    });

    await cmd.ExecuteNonQueryAsync();
    // Oracle receives the arrays → PL/SQL proc uses FORALL internally
    Console.WriteLine($"Sent {employees.Count} objects to FORALL proc");
}
Strategy B — Pass as UDT Object (strongly typed)
Define an Oracle Object Type for a single row, then pass a table of those objects. This is the cleanest approach for complex objects.
-- 1. Define a row type (like a struct)
CREATE OR REPLACE TYPE t_employee_obj AS OBJECT (
    ID     NUMBER(10),
    NAME   VARCHAR2(100),
    SALARY NUMBER(10,2)
);

-- 2. Define a table of that row type (like List<EmployeeObj>)
CREATE OR REPLACE TYPE t_employee_list AS TABLE OF t_employee_obj;

-- 3. Procedure that takes the whole list
CREATE OR REPLACE PROCEDURE bulk_insert_emp_udt (
    p_employees IN t_employee_list
) AS
BEGIN
    -- Access properties of each object with dot notation
    FORALL i IN 1 .. p_employees.COUNT
        INSERT INTO EMPLOYEES (ID, NAME, SALARY)
        VALUES (p_employees(i).ID, p_employees(i).NAME, p_employees(i).SALARY);
    COMMIT;
END;
/
The catch on the .NET side is IOracleCustomType. ODP.NET cannot bind a plain C# object to an Oracle OBJECT type automatically: you must implement IOracleCustomType and IOracleCustomTypeFactory on a C# class that maps to the Oracle OBJECT type, which is significant boilerplate per type. In practice, Strategy A (separate array parameters) is strongly preferred — it achieves the same FORALL performance with far less complexity. Use UDTs only when you are constrained to call an existing proc that takes a UDT.
// ── Correct approach for Oracle UDT parameters ──────────────────
// Oracle UDTs require a custom mapping class. You cannot pass a plain
// object[] or OracleString[] — the driver won't know how to serialize it.
// The pattern is:
//
// 1. Implement IOracleCustomType on a C# class (e.g. EmployeeUdt)
// 2. Implement IOracleCustomTypeFactory on a factory class
// 3. Decorate with [OracleCustomTypeMapping("SCHEMA.T_EMPLOYEE_OBJ")]
// 4. Bind an EmployeeUdt[] as an OracleDbType.Array parameter with
//    UdtTypeName set to the table type (e.g. "SCHEMA.T_EMPLOYEE_LIST")
//
// Because this requires ~100+ lines of mapping boilerplate PER type,
// the recommended alternative for new code is Strategy A (array params).
// See: https://docs.oracle.com/database/121/ODPNT/featUDTs.htm
// ── Recommended alternative: use Strategy A (separate arrays) ────
// Same Oracle-side FORALL performance, zero custom type boilerplate:
cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
    { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
    { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });
cmd.Parameters.Add(new OracleParameter("p_salaries", OracleDbType.Decimal)
    { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = salaries, Size = salaries.Length });
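For reference, here is a condensed sketch of the mapping boilerplate the comments above describe. It is an illustration, not a drop-in: EmployeeUdt and EmployeeUdtFactory are hypothetical names, SCHEMA.T_EMPLOYEE_OBJ must match your actual schema, and the exact IOracleCustomType signatures differ between the unmanaged driver (IntPtr) and recent managed releases (object) — check the ODP.NET version you target, since the managed/Core driver only gained UDT support in recent (21c+) releases.

```csharp
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;

// Maps one Oracle T_EMPLOYEE_OBJ instance to a C# object.
public class EmployeeUdt : IOracleCustomType
{
    [OracleObjectMapping("ID")]     public int Id;
    [OracleObjectMapping("NAME")]   public string Name;
    [OracleObjectMapping("SALARY")] public decimal Salary;

    public void FromCustomObject(OracleConnection con, object udt)
    {
        // Copy C# fields into the Oracle UDT being sent to the server
        OracleUdt.SetValue(con, udt, "ID", Id);
        OracleUdt.SetValue(con, udt, "NAME", Name);
        OracleUdt.SetValue(con, udt, "SALARY", Salary);
    }

    public void ToCustomObject(OracleConnection con, object udt)
    {
        // Copy Oracle UDT attributes back into C# fields when reading
        Id     = (int)OracleUdt.GetValue(con, udt, "ID");
        Name   = (string)OracleUdt.GetValue(con, udt, "NAME");
        Salary = (decimal)OracleUdt.GetValue(con, udt, "SALARY");
    }
}

// Factory the driver uses to instantiate EmployeeUdt for the object type
[OracleCustomTypeMapping("SCHEMA.T_EMPLOYEE_OBJ")]
public class EmployeeUdtFactory : IOracleCustomTypeFactory
{
    public IOracleCustomType CreateObject() => new EmployeeUdt();
}
```

A companion factory for the table type (SCHEMA.T_EMPLOYEE_LIST) implementing IOracleCustomTypeFactory plus IOracleArrayTypeFactory is also required before an EmployeeUdt[] can be bound; it is omitted here for brevity.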
SAVE EXCEPTIONS — don't stop on first error
By default, FORALL stops at the first DML error. SAVE EXCEPTIONS continues processing and collects all errors — you inspect them at the end.
CREATE OR REPLACE PROCEDURE bulk_insert_safe (
    p_ids   IN t_id_list,
    p_names IN t_name_list
) AS
    bulk_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(bulk_errors, -24381); -- ORA-24381: error(s) in array DML
BEGIN
    FORALL i IN 1 .. p_ids.COUNT SAVE EXCEPTIONS
        INSERT INTO EMPLOYEES (ID, NAME)
        VALUES (p_ids(i), p_names(i));
    COMMIT;
EXCEPTION
    WHEN bulk_errors THEN
        -- SQL%BULK_EXCEPTIONS holds all the errors
        FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE(
                'Error at index ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                ': ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE)
            );
        END LOOP;
        COMMIT; -- commit the SUCCESSFUL rows, skip the bad ones
END;
/
If you catch ORA-24381 in PL/SQL and COMMIT inside the exception block, successful rows are saved and failed rows are skipped. This is a partial-commit pattern — make sure it is your intended behavior. If you need all-or-nothing, don't use SAVE EXCEPTIONS; let the procedure fail on the first error instead.
FOR vs FORALL — Side-by-Side Comparison
-- ❌ SLOW: Regular FOR loop — context switch on every iteration
FOR i IN 1 .. p_ids.COUNT LOOP
    INSERT INTO EMPLOYEES (ID, NAME, SALARY)
    VALUES (p_ids(i), p_names(i), p_salaries(i));
    -- PL/SQL engine → SQL engine (switch) — repeated N times
END LOOP;

-- ✅ FAST: FORALL — single context switch for all rows
FORALL i IN 1 .. p_ids.COUNT
    INSERT INTO EMPLOYEES (ID, NAME, SALARY)
    VALUES (p_ids(i), p_names(i), p_salaries(i));
-- PL/SQL engine sends ALL rows → SQL engine at once

-- Key restriction: the FORALL body can only be a SINGLE DML statement.
-- You cannot put IF/ELSIF or multiple statements inside FORALL.
-- If you need conditional logic, filter your collection BEFORE FORALL.
You can also embed the BEGIN ... FORALL ... END; block as a PL/SQL anonymous block string directly in your C# code and execute it via ADO.NET. This is a legitimate — and in many real-world scenarios better — approach.
Stored Proc vs Inline Anonymous Block — When to Choose
CommandType.Text can execute not just a single SQL statement but a full PL/SQL anonymous block — meaning BEGIN ... END; with FORALL inside. ADO.NET sends it to Oracle, Oracle compiles and executes it in one round-trip. You get all the power of FORALL without a stored procedure.
Pattern 1 — FORALL anonymous block with Array Binding
The correct pattern: bind each column as a PL/SQL associative-array parameter (OracleCollectionType.PLSQLAssociativeArray), then write the PL/SQL block to use those bind arrays directly with FORALL. The driver ships each whole array to Oracle in one call; inside the block, each bind behaves as a PL/SQL collection, and FORALL runs over it in one shot.
public async Task BulkInsertForallInlineAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.BindByName = true; // placeholders are referenced by name, not position

    // ── How this works ───────────────────────────────────────────────
    // Each parameter below is a PLSQLAssociativeArray bind: the driver
    // sends the whole array as ONE collection. Inside the block, :p_ids
    // behaves like a PL/SQL associative array, so FORALL iterates it
    // directly. (Do NOT set ArrayBindCount — that is a different feature
    // that would execute the block once per element with scalar binds.)
    cmd.CommandText = @"
        BEGIN
            FORALL i IN 1 .. :p_count
                INSERT INTO EMPLOYEES (ID, NAME, SALARY)
                VALUES (:p_ids(i), :p_names(i), :p_salaries(i));
            COMMIT;
        END;";

    cmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });
    cmd.Parameters.Add(new OracleParameter("p_salaries", OracleDbType.Decimal)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = salaries, Size = salaries.Length });

    await cmd.ExecuteNonQueryAsync();
    Console.WriteLine($"FORALL inline: inserted {employees.Count} rows");
}
Pattern 2 — Conditional bulk: split in C#, two focused FORALL commands
FORALL's body must be a single DML statement — no IF/ELSE, no routing logic. The correct approach: split your collection in C# first, then fire two separate array-binding commands inside one transaction. Clean, debuggable, and correct.
You cannot route rows inside the FORALL body — accessing a flag like :p_flag(i) in a FOR/IF loop inside FORALL is invalid Oracle PL/SQL and will fail to compile (ORA-06550). Always split in C#.

// EmployeeSync carries an IsNew flag to distinguish inserts from updates.
public record EmployeeSync(int Id, string Name, decimal Salary, bool IsNew);

public async Task ConditionalBulkAsync(List<EmployeeSync> records)
{
    // ── Step 1: Split in C# ───────────────────────────────────────────
    // FORALL allows only a single DML statement as its body — no IF/ELSE,
    // no routing. Do the split here, where it is readable and testable.
    var toInsert = records.Where(r => r.IsNew).ToList();
    var toUpdate = records.Where(r => !r.IsNew).ToList();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        // ── Step 2: FORALL INSERT for new records ─────────────────────
        if (toInsert.Count > 0)
        {
            int[] insIds = toInsert.Select(r => r.Id).ToArray();
            string[] insNames = toInsert.Select(r => r.Name).ToArray();
            decimal[] insSal = toInsert.Select(r => r.Salary).ToArray();

            await using var insCmd = conn.CreateCommand();
            insCmd.Transaction = tx;
            insCmd.BindByName = true;
            insCmd.CommandText = @"
                BEGIN
                    FORALL i IN 1 .. :p_count
                        INSERT INTO EMPLOYEES (ID, NAME, SALARY)
                        VALUES (:p_ids(i), :p_names(i), :p_sal(i));
                END;";
            insCmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = toInsert.Count });
            insCmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = insIds, Size = insIds.Length });
            insCmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = insNames, Size = insNames.Length });
            insCmd.Parameters.Add(new OracleParameter("p_sal", OracleDbType.Decimal)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = insSal, Size = insSal.Length });
            await insCmd.ExecuteNonQueryAsync();
        }

        // ── Step 3: FORALL UPDATE for existing records ────────────────
        if (toUpdate.Count > 0)
        {
            int[] updIds = toUpdate.Select(r => r.Id).ToArray();
            string[] updNames = toUpdate.Select(r => r.Name).ToArray();
            decimal[] updSal = toUpdate.Select(r => r.Salary).ToArray();

            await using var updCmd = conn.CreateCommand();
            updCmd.Transaction = tx;
            updCmd.BindByName = true; // :p_ids appears after :p_names — name binding matters
            updCmd.CommandText = @"
                BEGIN
                    FORALL i IN 1 .. :p_count
                        UPDATE EMPLOYEES
                        SET NAME = :p_names(i), SALARY = :p_sal(i)
                        WHERE ID = :p_ids(i);
                END;";
            updCmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = toUpdate.Count });
            updCmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = updNames, Size = updNames.Length });
            updCmd.Parameters.Add(new OracleParameter("p_sal", OracleDbType.Decimal)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = updSal, Size = updSal.Length });
            updCmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
                { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = updIds, Size = updIds.Length });
            await updCmd.ExecuteNonQueryAsync();
        }

        await tx.CommitAsync(); // inserts + updates commit atomically
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
Pattern 3 — FORALL inline with SAVE EXCEPTIONS (error collection)
public async Task BulkInsertWithErrorCollectionAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var cmd = conn.CreateCommand();
    cmd.BindByName = true;
    cmd.CommandText = @"
        DECLARE
            bulk_errors EXCEPTION;
            PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
        BEGIN
            FORALL i IN 1 .. :p_count SAVE EXCEPTIONS
                INSERT INTO EMPLOYEES (ID, NAME)
                VALUES (:p_ids(i), :p_names(i));
            COMMIT;
        EXCEPTION
            WHEN bulk_errors THEN
                -- Log failed rows, commit the successful ones
                FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
                    INSERT INTO IMPORT_ERRORS (ROW_INDEX, ERROR_CODE, LOGGED_AT)
                    VALUES (
                        SQL%BULK_EXCEPTIONS(i).ERROR_INDEX,
                        SQL%BULK_EXCEPTIONS(i).ERROR_CODE,
                        SYSDATE
                    );
                END LOOP;
                COMMIT; -- commit successes + error log together
        END;";

    cmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });

    await cmd.ExecuteNonQueryAsync();
    // Errors were logged to the IMPORT_ERRORS table by the PL/SQL block itself
}
Decision Guide — Inline Block vs Stored Procedure
✅ USE INLINE WHEN:
· Logic lives only in this .NET application
· You want Git to be the single source of truth for all logic
· Rapid development — no DBA deployment cycle needed
· One-off data migration / import script
· Need conditional routing (INSERT some, UPDATE others) in same batch
· Prototyping before deciding if a stored proc is warranted

✅ USE STORED PROCEDURE WHEN:
· Same bulk logic is called from multiple apps / services
· DBA team manages and owns the database-side logic
· Security model requires EXECUTE privilege, not direct table access
· Oracle's plan caching for the specific proc body is critical
· You want the logic to run even if the .NET app is replaced
If an inline block later earns a permanent home in the database, you can promote the same body to a stored procedure by wrapping it in CREATE OR REPLACE PROCEDURE ... AS BEGIN ... END;.
The foundation: every bulk operation should look like this. Always cast to OracleTransaction, and always roll back in the catch block.
public async Task SafeBulkOperationAsync(List<Employee> employees) { await using var conn = db.Create(); await conn.OpenAsync(); await using var tx = (OracleTransaction)await conn.BeginTransactionAsync(); try { // Operation 1: Bulk insert await using var insertCmd = conn.CreateCommand(); insertCmd.Transaction = tx; insertCmd.CommandText = "INSERT INTO EMPLOYEES (ID,NAME) VALUES (:Id,:Name)"; insertCmd.ArrayBindCount = employees.Count; insertCmd.Parameters.Add("Id", OracleDbType.Int32, employees.Select(e=>e.Id).ToArray(), ParameterDirection.Input); insertCmd.Parameters.Add("Name", OracleDbType.Varchar2, employees.Select(e=>e.Name).ToArray(), ParameterDirection.Input); await insertCmd.ExecuteNonQueryAsync(); // Operation 2: Audit log — same transaction await using var auditCmd = conn.CreateCommand(); auditCmd.Transaction = tx; auditCmd.CommandText = "INSERT INTO AUDIT_LOG (EVENT,ROW_COUNT,LOGGED_AT) VALUES ('BULK_INSERT',:Count,SYSDATE)"; auditCmd.Parameters.Add("Count", employees.Count); await auditCmd.ExecuteNonQueryAsync(); await tx.CommitAsync(); // ✅ both succeed together } catch (OracleException ex) { await tx.RollbackAsync(); // ❌ both roll back together throw new InvalidOperationException($"Oracle error {ex.Number}: {ex.Message}", ex); } catch { await tx.RollbackAsync(); throw; } }
Stop copy-pasting the try/catch/rollback pattern. Wrap it once as a generic higher-order function that accepts any async work as a Func<>. This is the cleanest reusable pattern for ADO.NET + Dapper.
// .NET 8 primary constructor — inject DbConnectionFactory (not raw string) // DbConnectionFactory wraps IOptions<DatabaseOptions> and creates connections. public class TransactionHelper(DbConnectionFactory db) { /// <summary> /// Execute any async work inside an Oracle transaction. /// Commits on success, rolls back on any exception. /// </summary> /// <param name="work"> /// Your bulk operation. Receives (connection, transaction) — use both. /// </param> public async Task ExecuteInTransactionAsync( Func<OracleConnection, OracleTransaction, Task> work) { await using var conn = db.Create(); await conn.OpenAsync(); await using var tx = (OracleTransaction)await conn.BeginTransactionAsync(); try { await work(conn, tx); // ← your code runs here await tx.CommitAsync(); } catch { await tx.RollbackAsync(); throw; } } /// Overload that also returns a result from the transaction public async Task<T> ExecuteInTransactionAsync<T>( Func<OracleConnection, OracleTransaction, Task<T>> work) { await using var conn = db.Create(); await conn.OpenAsync(); await using var tx = (OracleTransaction)await conn.BeginTransactionAsync(); try { T result = await work(conn, tx); await tx.CommitAsync(); return result; } catch { await tx.RollbackAsync(); throw; } } } // Register in Program.cs: // builder.Services.AddSingleton<TransactionHelper>();
Usage — bulk insert + audit in one clean call
// Inject TransactionHelper via DI public class EmployeeService(TransactionHelper txHelper) { public async Task SyncEmployeesAsync(List<Employee> employees) { await txHelper.ExecuteInTransactionAsync(async (conn, tx) => { // ── Bulk insert ────────────────────────────────────── await using var cmd = conn.CreateCommand(); cmd.Transaction = tx; cmd.CommandText = "INSERT INTO EMPLOYEES (ID,NAME,SALARY) VALUES (:Id,:Name,:Salary)"; cmd.ArrayBindCount = employees.Count; cmd.Parameters.Add("Id", OracleDbType.Int32, employees.Select(e=>e.Id).ToArray(), ParameterDirection.Input); cmd.Parameters.Add("Name", OracleDbType.Varchar2, employees.Select(e=>e.Name).ToArray(), ParameterDirection.Input); cmd.Parameters.Add("Salary", OracleDbType.Decimal, employees.Select(e=>e.Salary).ToArray(), ParameterDirection.Input); await cmd.ExecuteNonQueryAsync(); // ── Dapper also works — pass the transaction explicitly ── await conn.ExecuteAsync( "INSERT INTO AUDIT_LOG (EVENT,LOGGED_AT) VALUES ('SYNC',SYSDATE)", transaction: tx); // ← Dapper accepts IDbTransaction }); } // Overload with return value — e.g. get inserted IDs back public async Task<int> InsertAndCountAsync(List<Employee> employees) { return await txHelper.ExecuteInTransactionAsync<int>(async (conn, tx) => { await using var cmd = conn.CreateCommand(); cmd.Transaction = tx; cmd.CommandText = "INSERT INTO EMPLOYEES (ID,NAME) VALUES (:Id,:Name)"; cmd.ArrayBindCount = employees.Count; // ... parameters ... await cmd.ExecuteNonQueryAsync(); return employees.Count; // return value flows back through the wrapper }); } }
Pass transaction: tx to any Dapper call and it joins the same Oracle transaction — mix ADO.NET and Dapper freely inside the same wrapper. For larger applications, the Unit of Work pattern groups multiple repository operations under one transaction without each repository knowing about the others.
// Contract: exposes connection + transaction for repositories to share
public interface IUnitOfWork : IAsyncDisposable
{
    OracleConnection Connection { get; }
    OracleTransaction Transaction { get; }
    Task CommitAsync();
    Task RollbackAsync();
}
public class OracleUnitOfWork : IUnitOfWork { public OracleConnection Connection { get; } public OracleTransaction Transaction { get; private set; } private OracleUnitOfWork(OracleConnection conn, OracleTransaction tx) { Connection = conn; Transaction = tx; } // Factory — accepts an already-created OracleConnection (from DbConnectionFactory). // Async creation is required because OpenAsync + BeginTransactionAsync are async. public static async Task<OracleUnitOfWork> CreateAsync(OracleConnection conn) { await conn.OpenAsync(); var tx = (OracleTransaction)await conn.BeginTransactionAsync(); return new OracleUnitOfWork(conn, tx); } public Task CommitAsync() => Transaction.CommitAsync(); public Task RollbackAsync() => Transaction.RollbackAsync(); public async ValueTask DisposeAsync() { await Transaction.DisposeAsync(); await Connection.DisposeAsync(); } }
// Repository accepts IUnitOfWork — doesn't manage its own transaction public class EmployeeRepository(IUnitOfWork uow) { public async Task BulkInsertAsync(List<Employee> employees) { await using var cmd = uow.Connection.CreateCommand(); cmd.Transaction = uow.Transaction; cmd.CommandText = "INSERT INTO EMPLOYEES (ID,NAME) VALUES (:Id,:Name)"; cmd.ArrayBindCount = employees.Count; cmd.Parameters.Add("Id", OracleDbType.Int32, employees.Select(e=>e.Id).ToArray(), ParameterDirection.Input); cmd.Parameters.Add("Name", OracleDbType.Varchar2, employees.Select(e=>e.Name).ToArray(), ParameterDirection.Input); await cmd.ExecuteNonQueryAsync(); } public async Task BulkDeleteAsync(int[] ids) { // Dapper with the same UoW transaction — IN clause needs parens for Oracle await uow.Connection.ExecuteAsync( "DELETE FROM EMPLOYEES WHERE ID IN (:Ids)", new { Ids = ids }, transaction: uow.Transaction); } } // ── Service layer ties everything together ──────────────────────── // .NET 8 primary constructor — inject DbConnectionFactory public class SyncService(DbConnectionFactory db) { public async Task SyncAsync(List<Employee> toAdd, int[] toRemove) { await using var uow = await OracleUnitOfWork.CreateAsync(db.Create()); var repo = new EmployeeRepository(uow); try { await repo.BulkInsertAsync(toAdd); // same transaction await repo.BulkDeleteAsync(toRemove); // same transaction await uow.CommitAsync(); // ✅ commit both } catch { await uow.RollbackAsync(); // ❌ roll back both throw; } } }
TransactionScope is a .NET ambient transaction that automatically enlists Oracle connections — useful when you have code that can't easily receive a connection/transaction object directly.
TransactionScope with Oracle requires the Distributed Transaction Coordinator (DTC/MSDTC) for cross-connection scenarios. For same-connection operations it works fine. Avoid it if you control the code — the UoW or Func wrapper is cleaner.

using System.Transactions;

public async Task BulkWithScopeAsync(List<Employee> employees)
{
    // AsyncFlowOption.Enabled propagates the scope across await points (required)
    using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

    // ⚠️ Enlistment depends on the "Enlist" connection-string setting.
    // To be explicit, EITHER set it in the connection string:
    //   "User Id=...;...; Enlist=true;"
    // OR enlist manually after opening:
    await using var conn = db.Create();
    await conn.OpenAsync();
    conn.EnlistTransaction(Transaction.Current); // ← explicit enlistment

    await using var cmd = conn.CreateCommand();
    cmd.CommandText = "INSERT INTO EMPLOYEES (ID,NAME) VALUES (:Id,:Name)";
    cmd.ArrayBindCount = employees.Count; // plain SQL statement → array binding is correct here
    cmd.Parameters.Add("Id", OracleDbType.Int32,
        employees.Select(e => e.Id).ToArray(), ParameterDirection.Input);
    cmd.Parameters.Add("Name", OracleDbType.Varchar2,
        employees.Select(e => e.Name).ToArray(), ParameterDirection.Input);
    await cmd.ExecuteNonQueryAsync();

    scope.Complete(); // marks for commit — if not called, auto-rollback on Dispose
}
// If Complete() was called → commits. If an exception escapes → rolls back automatically.
When to choose each approach
You control the connection              → Generic Func wrapper or UoW
Multiple repositories / clean arch      → Unit of Work pattern
Quick script / simple one-off           → Basic try/catch pattern
Ambient transaction across services     → TransactionScope (same conn only)
Cross-database distributed transaction  → TransactionScope + DTC (avoid if possible)
Catch OracleException specifically — it gives you the Oracle error number for fine-grained handling.
catch (OracleException ex)
{
    switch (ex.Number)
    {
        case 1:     // ORA-00001: unique constraint violated
            throw new DuplicateKeyException("Duplicate ID detected", ex);
        case 1400:  // ORA-01400: cannot insert NULL into NOT NULL column
            throw new ArgumentNullException("Required field is null", ex);
        case 2291:  // ORA-02291: FK constraint — parent key not found
            throw new InvalidOperationException("Foreign key not found", ex);
        case 1438:  // ORA-01438: value too large for column
            throw new OverflowException("Data exceeds column length", ex);
        case 12170: // ORA-12170: TNS connect timeout
        case 3135:  // ORA-03135: connection lost contact
        case 1033:  // ORA-01033: ORACLE initialization in progress
            throw new TimeoutException("Oracle connection problem", ex);
        case 24381: // ORA-24381: FORALL SAVE EXCEPTIONS errors
            for (int i = 0; i < ex.Errors.Count; i++)
                Console.WriteLine($"  Bulk error [{i}]: {ex.Errors[i].Message}");
            throw;
        default:
            throw;
    }
}
How FORALL interacts with .NET transactions hinges on three things: SAVE EXCEPTIONS, commit placement, and how .NET wraps the transaction around the anonymous block.
Option A — Oracle commits inside the PL/SQL block
The simplest approach: the PL/SQL anonymous block manages its own COMMIT/ROLLBACK. The .NET side has no transaction — Oracle handles atomicity internally.
public async Task ForallOracleCommitAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();
    decimal[] salaries = employees.Select(e => e.Salary).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();

    // No BeginTransactionAsync here — the COMMIT inside the block owns it
    await using var cmd = conn.CreateCommand();
    cmd.BindByName = true;
    cmd.CommandText = @"
        BEGIN
            FORALL i IN 1 .. :p_count
                INSERT INTO EMPLOYEES (ID, NAME, SALARY)
                VALUES (:p_ids(i), :p_names(i), :p_sal(i));
            COMMIT;          -- Oracle commits atomically here
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK;    -- Oracle rolls back on any error
                RAISE;
        END;";

    cmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });
    cmd.Parameters.Add(new OracleParameter("p_sal", OracleDbType.Decimal)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = salaries, Size = salaries.Length });

    await cmd.ExecuteNonQueryAsync();
}
Option B — .NET owns the transaction, FORALL participates
When you need FORALL to be part of a larger .NET transaction — for example, insert employees AND write an audit record atomically — .NET begins the transaction and the FORALL block joins it. Do NOT put COMMIT inside the PL/SQL block when .NET owns the transaction.
public async Task ForallNetTransactionAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync();

    // .NET begins the transaction
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        // ── Operation 1: FORALL bulk insert (NO COMMIT inside block!) ──
        await using var forallCmd = conn.CreateCommand();
        forallCmd.Transaction = tx; // ← assigns FORALL to this transaction
        forallCmd.BindByName = true;
        forallCmd.CommandText = @"
            BEGIN
                FORALL i IN 1 .. :p_count
                    INSERT INTO EMPLOYEES (ID, NAME)
                    VALUES (:p_ids(i), :p_names(i));
                -- ⚠️ NO COMMIT here — the .NET transaction controls the commit
            END;";
        forallCmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
        forallCmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
            { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
        forallCmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
            { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });
        await forallCmd.ExecuteNonQueryAsync();

        // ── Operation 2: Audit log — same transaction ──
        await using var auditCmd = conn.CreateCommand();
        auditCmd.Transaction = tx;
        auditCmd.CommandText =
            "INSERT INTO AUDIT_LOG(EVENT,ROW_COUNT,LOGGED_AT) VALUES('FORALL_INSERT',:Cnt,SYSDATE)";
        auditCmd.Parameters.Add("Cnt", employees.Count);
        await auditCmd.ExecuteNonQueryAsync();

        // ✅ .NET commits both: the FORALL inserts + the audit log
        await tx.CommitAsync();
    }
    catch
    {
        // ❌ Rolls back both the FORALL inserts and the audit log
        await tx.RollbackAsync();
        throw;
    }
}
Note that Option B removes COMMIT and ROLLBACK from the PL/SQL block entirely. A COMMIT inside the block would commit the FORALL inserts immediately and break atomicity with the other .NET operations. The only statement left in the block is the FORALL DML — .NET handles the lifecycle.
Same pattern using the generic TransactionHelper
// Reuse the TransactionHelper from the Transactions module
await txHelper.ExecuteInTransactionAsync(async (conn, tx) =>
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();

    await using var cmd = conn.CreateCommand();
    cmd.Transaction = tx; // attach FORALL to the wrapper's transaction
    cmd.BindByName = true;
    cmd.CommandText = @"
        BEGIN
            FORALL i IN 1 .. :p_count
                INSERT INTO EMPLOYEES (ID, NAME)
                VALUES (:p_ids(i), :p_names(i));
            -- No COMMIT — TransactionHelper.CommitAsync() does it
        END;";
    cmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });
    await cmd.ExecuteNonQueryAsync();
});
FORALL SAVE EXCEPTIONS inside a .NET transaction
This is the most nuanced case. SAVE EXCEPTIONS catches row-level errors and continues — but how those interact with the .NET transaction depends on whether you commit inside the block or let .NET decide.
If you use SAVE EXCEPTIONS and catch ORA-24381 inside the PL/SQL block, then COMMIT inside the exception handler, the successful rows are committed immediately — before .NET gets control back. A subsequent .NET-level rollback cannot undo those rows. Decide up front: all-or-nothing (no SAVE EXCEPTIONS) or partial success (COMMIT inside the block, and .NET does not roll back).
// Pattern: partial success — good rows committed, bad rows logged
// No .NET transaction here: the SAVE EXCEPTIONS block commits internally
public async Task<int> ForallPartialSuccessAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();

    await using var conn = db.Create();
    await conn.OpenAsync(); // ← no BeginTransactionAsync — the block owns commit/rollback

    await using var cmd = conn.CreateCommand();
    cmd.BindByName = true;
    cmd.CommandText = @"
        DECLARE
            bulk_errors EXCEPTION;
            PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
            v_failed NUMBER := 0;
        BEGIN
            FORALL i IN 1 .. :p_count SAVE EXCEPTIONS
                INSERT INTO EMPLOYEES (ID, NAME)
                VALUES (:p_ids(i), :p_names(i));
            COMMIT; -- all rows succeeded
            :v_err_count := 0;
        EXCEPTION
            WHEN bulk_errors THEN
                v_failed := SQL%BULK_EXCEPTIONS.COUNT;
                FOR i IN 1 .. v_failed LOOP
                    INSERT INTO IMPORT_ERRORS(ROW_INDEX, ERROR_CODE, LOGGED_AT)
                    VALUES(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX,
                           SQL%BULK_EXCEPTIONS(i).ERROR_CODE,
                           SYSDATE);
                END LOOP;
                COMMIT; -- commit good rows + error log; bad rows were never inserted
                :v_err_count := v_failed;
        END;";

    cmd.Parameters.Add(new OracleParameter("p_count", OracleDbType.Int32) { Value = employees.Count });
    cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = ids, Size = ids.Length });
    cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        { CollectionType = OracleCollectionType.PLSQLAssociativeArray, Value = names, Size = names.Length });

    var errParam = new OracleParameter("v_err_count", OracleDbType.Int32)
        { Direction = ParameterDirection.Output };
    cmd.Parameters.Add(errParam);

    await cmd.ExecuteNonQueryAsync();
    return ((OracleDecimal)errParam.Value).ToInt32(); // how many rows failed
}

// Caller knows how many rows were rejected:
int failCount = await repo.ForallPartialSuccessAsync(employees);
if (failCount > 0)
    logger.LogWarning("{Count} rows failed — see IMPORT_ERRORS table", failCount);
All-or-nothing with FORALL (no SAVE EXCEPTIONS)
```csharp
// If ANY row fails → everything rolls back. .NET owns the transaction.
public async Task ForallAllOrNothingAsync(List<Employee> employees)
{
    int[] ids = employees.Select(e => e.Id).ToArray();
    string[] names = employees.Select(e => e.Name).ToArray();   // NAME is NOT NULL

    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        await using var cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.BindByName = true;
        cmd.CommandText = @"
            BEGIN
                -- No SAVE EXCEPTIONS → first failure throws, .NET catches it
                FORALL i IN INDICES OF :p_ids
                    INSERT INTO EMPLOYEES (ID, NAME)
                    VALUES (:p_ids(i), :p_names(i));
                -- No COMMIT → the .NET transaction commits below
            END;";
        cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        {
            CollectionType = OracleCollectionType.PLSQLAssociativeArray,
            Value = ids,
            Size = ids.Length
        });
        cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        {
            CollectionType = OracleCollectionType.PLSQLAssociativeArray,
            Value = names,
            Size = names.Length
        });
        await cmd.ExecuteNonQueryAsync();   // throws OracleException if any row fails
        await tx.CommitAsync();             // ✅ all rows succeeded
    }
    catch
    {
        await tx.RollbackAsync();           // ❌ zero rows inserted
        throw;
    }
}
```
FORALL inside Unit of Work
When your architecture uses the Unit of Work pattern, FORALL participates the same way as any other command — assign uow.Transaction, omit COMMIT from the block.
```csharp
public class EmployeeRepository(IUnitOfWork uow)
{
    public async Task ForallInsertAsync(List<Employee> employees)
    {
        int[] ids = employees.Select(e => e.Id).ToArray();
        string[] names = employees.Select(e => e.Name).ToArray();

        await using var cmd = uow.Connection.CreateCommand();
        cmd.Transaction = uow.Transaction;   // ← key line
        cmd.BindByName = true;
        cmd.CommandText = @"
            BEGIN
                FORALL i IN INDICES OF :p_ids
                    INSERT INTO EMPLOYEES (ID, NAME)
                    VALUES (:p_ids(i), :p_names(i));
                -- UoW.CommitAsync() handles the commit
            END;";
        cmd.Parameters.Add(new OracleParameter("p_ids", OracleDbType.Int32)
        {
            CollectionType = OracleCollectionType.PLSQLAssociativeArray,
            Value = ids,
            Size = ids.Length
        });
        cmd.Parameters.Add(new OracleParameter("p_names", OracleDbType.Varchar2)
        {
            CollectionType = OracleCollectionType.PLSQLAssociativeArray,
            Value = names,
            Size = names.Length
        });
        await cmd.ExecuteNonQueryAsync();
    }
}

// Service layer: FORALL insert + Dapper audit — one atomic commit.
// SyncService receives DbConnectionFactory via DI (.NET 8 primary constructor).
public class SyncService(DbConnectionFactory db)
{
    public async Task SyncAsync(List<Employee> employees)
    {
        await using var uow = await OracleUnitOfWork.CreateAsync(db.Create());
        var repo = new EmployeeRepository(uow);
        try
        {
            await repo.ForallInsertAsync(employees);   // FORALL, no COMMIT inside
            await uow.Connection.ExecuteAsync(         // Dapper, same transaction
                "INSERT INTO AUDIT_LOG(EVENT, LOGGED_AT) VALUES('SYNC', SYSDATE)",
                transaction: uow.Transaction);
            await uow.CommitAsync();                   // ✅ atomic: FORALL + audit
        }
        catch
        {
            await uow.RollbackAsync();                 // ❌ both rolled back
            throw;
        }
    }
}
```
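For readers who skipped straight here: the snippet above assumes an `IUnitOfWork` abstraction that exposes the open connection and its transaction, plus an `OracleUnitOfWork.CreateAsync` factory. A minimal sketch of that pair might look like the following (your own definition may differ; the member names are taken from the usage above, everything else is an assumption):

```csharp
using Oracle.ManagedDataAccess.Client;

// Minimal Unit of Work contract assumed by EmployeeRepository above.
public interface IUnitOfWork : IAsyncDisposable
{
    OracleConnection Connection { get; }
    OracleTransaction Transaction { get; }
    Task CommitAsync();
    Task RollbackAsync();
}

// One connection + one transaction per unit of work.
public sealed class OracleUnitOfWork : IUnitOfWork
{
    private OracleUnitOfWork(OracleConnection conn, OracleTransaction tx)
        => (Connection, Transaction) = (conn, tx);

    public OracleConnection Connection { get; }
    public OracleTransaction Transaction { get; }

    // Factory: opens the connection and starts the transaction in one call.
    public static async Task<OracleUnitOfWork> CreateAsync(OracleConnection conn)
    {
        await conn.OpenAsync();
        var tx = (OracleTransaction)await conn.BeginTransactionAsync();
        return new OracleUnitOfWork(conn, tx);
    }

    public Task CommitAsync() => Transaction.CommitAsync();
    public Task RollbackAsync() => Transaction.RollbackAsync();

    public async ValueTask DisposeAsync()
    {
        await Transaction.DisposeAsync();
        await Connection.DisposeAsync();
    }
}
```

Disposing the transaction before the connection means an unhandled exception that skips both `CommitAsync` and `RollbackAsync` still rolls back implicitly when the transaction is disposed.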
Decision matrix — FORALL + Transaction combinations
| Scenario | Pattern |
|---|---|
| FORALL only, self-contained | COMMIT inside the PL/SQL block |
| FORALL + other .NET ops, all-or-nothing | .NET owns the transaction, NO COMMIT in the block |
| FORALL, partial success OK (log failures) | SAVE EXCEPTIONS + COMMIT in the handler |
| FORALL in a repository with Unit of Work | `cmd.Transaction = uow.Transaction`, no COMMIT |
| FORALL across multiple repos / services | Unit of Work pattern |

⚠️ NEVER mix a .NET transaction with a COMMIT inside the block: the block's COMMIT fires before .NET can roll back, so you lose atomicity.
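Spelled out, that forbidden combination looks like this (a deliberately broken sketch, not something to ship; `conn` is assumed to be an open `OracleConnection`):

```csharp
// ❌ ANTI-PATTERN: .NET transaction + COMMIT inside the PL/SQL block.
await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
await using var cmd = conn.CreateCommand();
cmd.Transaction = tx;                // .NET believes it owns the transaction...
cmd.CommandText = @"
    BEGIN
        INSERT INTO EMPLOYEES (ID, NAME) VALUES (999, 'Ghost');
        COMMIT;   -- ...but this COMMIT ends it inside the database, right now
    END;";
await cmd.ExecuteNonQueryAsync();

await tx.RollbackAsync();   // too late for that row: it is already committed
```

The rollback itself does not fail; it simply has nothing left to undo, which makes this bug easy to miss in testing.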
| Method | 10k rows (est.) | Use Case |
|---|---|---|
| Row-by-row loop | ~30–90 s | ❌ Never |
| Dapper ExecuteAsync (list) | ~5–15 s | Small batches, convenience |
| Dapper INSERT ALL | ~2–5 s | Medium batches (<1,000) |
| ADO.NET Array Binding | ~0.5–2 s | Large parameterized DML |
| FORALL (inline / stored proc) | ~0.4–1.5 s | PL/SQL-side bulk DML, conditional routing |
| OracleBulkCopy | ~0.1–0.5 s | ETL, data migration, CSV load |
| Temp Table + MERGE/UPDATE | ~0.3–1 s | Complex upsert / bulk update |
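These timings are rough estimates; they shift with network latency, row width, triggers, and indexes. To get numbers for your own environment, a simple `Stopwatch` harness can wrap any of the methods in this guide (a sketch; the delegate parameter stands in for whichever insert method you are measuring):

```csharp
using System.Diagnostics;

public static class BulkBenchmark
{
    // Times one bulk method against your own database and data.
    // Run it twice and trust the second number: the first run also pays
    // connection-pool warm-up and JIT costs.
    public static async Task<TimeSpan> MeasureAsync(
        Func<List<Employee>, Task> bulkInsert, List<Employee> rows)
    {
        var sw = Stopwatch.StartNew();
        await bulkInsert(rows);
        sw.Stop();
        return sw.Elapsed;
    }
}

// Usage:
// var elapsed = await BulkBenchmark.MeasureAsync(repo.ForallInsertAsync, employees);
// Console.WriteLine($"{employees.Count} rows in {elapsed.TotalSeconds:F2}s");
```

Remember to TRUNCATE or roll back between runs so you are not measuring inserts into an ever-growing table.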
Tuning Tips
- `INSERT /*+ APPEND */ INTO ... SELECT` tells Oracle to use a direct-path write, skipping the buffer cache; for array-bound `INSERT ... VALUES`, the equivalent hint is `/*+ APPEND_VALUES */` (11gR2+). Even faster for large loads.
- Connection pooling is on by default; size it for bulk workloads in the connection string, e.g. `Min Pool Size=5;Max Pool Size=20`.
- `NOLOGGING` can reduce redo log overhead significantly for bulk staging operations.

Decision Tree
Your data shape?
├─ List<T> objects → Dapper (direct) OR decompose to arrays (ADO.NET)
└─ Raw arrays → Array Binding directly
Need to INSERT large data?
├─ <1,000 rows → Dapper ExecuteAsync (List<T> directly)
├─ 1k–100k rows → ADO.NET Array Binding (project List to arrays)
├─ 100k+ rows → OracleBulkCopy (List → DataTable or IDataReader)
├─ CSV / ETL file → OracleBulkCopy + IDataReader (streaming)
└─ Via stored proc → FORALL in PL/SQL + Array Binding from .NET
Need to UPDATE many rows?
├─ Same value for all → Single UPDATE WHERE condition
├─ Different per row → ADO.NET Array Binding
└─ Very large set → Temp Table + MERGE/UPDATE
Need to UPSERT (insert or update)?
└─ Always → Oracle MERGE statement
   ├─ Small set → Dapper MERGE with :params
   └─ Large set → BulkCopy to GTT + MERGE
Need to DELETE many rows?
├─ By condition → Single DELETE WHERE
├─ By IDs (<1000) → Dapper IN (:Ids)
└─ By IDs (>1000) → Array Binding or chunk loop
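The last DELETE branch exists because Oracle rejects IN lists with more than 1,000 expressions (ORA-01795), and that ceiling applies to bind placeholders too. A chunked Dapper loop inside one transaction stays under the limit (a sketch; it assumes the `db` factory field and `using Dapper;` from the earlier examples):

```csharp
// Deletes in batches of 1,000 IDs to stay under Oracle's IN-list ceiling (ORA-01795).
public async Task DeleteByIdsAsync(IReadOnlyCollection<int> ids)
{
    await using var conn = db.Create();
    await conn.OpenAsync();
    await using var tx = (OracleTransaction)await conn.BeginTransactionAsync();
    try
    {
        foreach (var chunk in ids.Chunk(1000))   // Enumerable.Chunk, .NET 6+
        {
            // Dapper expands :Ids into one bind variable per element.
            await conn.ExecuteAsync(
                "DELETE FROM EMPLOYEES WHERE ID IN :Ids",
                new { Ids = chunk },
                transaction: tx);
        }
        await tx.CommitAsync();   // all chunks succeed or none do
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
```

Wrapping every chunk in one transaction keeps the delete atomic; drop the transaction if partial progress is acceptable and you want each chunk committed as it goes.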
Parameter Syntax Reminder
Oracle placeholders are `:name`, not SQL Server's `@name`. Dapper maps an anonymous-object property `Id` to `:Id` automatically; with raw ADO.NET, either add parameters in placeholder order or set `cmd.BindByName = true` (the default is bind-by-position) and match names.
Key Classes
`OracleConnection`, `OracleCommand`, `OracleParameter` (with the `OracleDbType` enum), `OracleTransaction`, and `OracleBulkCopy`, all from `Oracle.ManagedDataAccess.Client`.
Critical Rules to Remember
Set `cmd.ArrayBindCount = list.Count` before adding any parameters. This applies to plain DML array binding only; PL/SQL FORALL blocks take associative-array binds (`OracleCollectionType.PLSQLAssociativeArray`) instead.