Feed aggregator

APEX Message box label

Tom Kyte - 13 hours 22 min ago
Is it possible to change the labels of the confirm dialog buttons from "Cancel/Ok" to "No/Yes" in APEX?
Categories: DBA Blogs

Is there an Oracle document that has a checklist to be able to answer whether database server will handle "peak" load

Tom Kyte - 13 hours 22 min ago
Greetings, A question from the client that comes up every few years is to predict whether the Oracle database server will be able to handle a new application's peak load. Instead of trying to think of all that needs to be considered on the fly, it would be great if there were an Oracle document with a checklist of all the questions we must answer so that we can give the client a definite answer of "yes, we can predict it if x, y and z are performed". I know that in most cases this will be nearly impossible to answer, as it will take too much time and we can't control the variables for other apps that share the same resources like database, network, SAN, etc. For instance, usually the network and SAN are shared with the database server, so we will need to get peak loads of all the other applications plus the expected max throughput for the network and SAN. Thanks for your help, John
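Not part of the original question or of any Oracle checklist; just a hedged sketch of the kind of measurement such an exercise usually relies on: pulling historical peak values from the AWR metric history (requires the Diagnostics Pack licence; metric names can vary by version).

select metric_name,
       metric_unit,
       trunc(begin_time) snapshot_day,
       round(max(maxval), 1) peak_value
from   dba_hist_sysmetric_summary
where  metric_name in ('Host CPU Utilization (%)',
                       'Physical Read Total Bytes Per Sec',
                       'Physical Write Total Bytes Per Sec')
group  by metric_name, metric_unit, trunc(begin_time)
order  by snapshot_day, metric_name;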
Categories: DBA Blogs

RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

Hemant K Chitale - Wed, 2021-05-05 09:40

 You are attempting to restore a database to another server.  

So, you have verified that you have controlfile and datafile backups on the source server:



RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
51      Full    11.52M     DISK        00:00:01     20-FEB-21
        BP Key: 51   Status: AVAILABLE  Compressed: NO  Tag: TAG20210220T114245
        Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
        Control File Included: Ckp SCN: 1093419      Ckp time: 20-FEB-21

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
55      Full    11.52M     DISK        00:00:02     04-MAY-21
        BP Key: 55   Status: AVAILABLE  Compressed: NO  Tag: TAG20210504T232054
        Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
        Control File Included: Ckp SCN: 1126526      Ckp time: 04-MAY-21

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
56      Full    11.48M     DISK        00:00:01     04-MAY-21
        BP Key: 56   Status: AVAILABLE  Compressed: NO  Tag: TAG20210504T232851
        Piece Name: /home/oracle/controlfile.bak
        Control File Included: Ckp SCN: 1126757      Ckp time: 04-MAY-21

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
57      Full    11.52M     DISK        00:00:02     04-MAY-21
        BP Key: 57   Status: AVAILABLE  Compressed: NO  Tag: TAG20210504T232853
        Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
        Control File Included: Ckp SCN: 1126766      Ckp time: 04-MAY-21

RMAN>


You have copied the backups to the new target server and attempt to restore:

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed May 5 22:27:26 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area 1207958960 bytes

Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes

RMAN> restore controlfile from '/home/oracle/controlfile.bak';

Starting restore at 05-MAY-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/05/2021 22:27:47
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>ls /home/oracle/controlfile.bak
/home/oracle/controlfile.bak
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


So, why do you get the RMAN-06172 error? All the controlfile backups are available, including the manual backup to /home/oracle/controlfile.bak and the three autobackups (one from February 2021 and two from 04-May-2021).

oracle19c>oerr rman 6172
6172, 1, "no AUTOBACKUP found or specified handle is not a valid copy or piece"
// *Cause: A restore could not proceed because no AUTOBACKUP was found or
// specified handle is not a valid copy or backup piece.
// In case of restore from AUTOBACKUP, it may be the case that a
// backup exists, but it does not satisfy the criteria specified in
// the user's restore operands.
// In case of restore from handle, it may be the handle is not a
// backup piece or control file copy. In may be that it does not
// exist.
// *Action: Modify AUTOBACKUP search criteria or verify the handle.
oracle19c>
oracle19c>ls -l /home/oracle/controlfile.bak
-rw-r-----. 1 root root 12058624 May 4 23:28 /home/oracle/controlfile.bak
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
-rw-r-----. 1 root root 12091392 Feb 20 11:42 /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:20 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:28 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


You get the "error" message that there are no AUTOBACKUPs because the "oracle19c" account is unable to actually *read* those pieces.  It can list them using "ls" because it has permission to read the OS folders containing them, but it does not have permission to read the files, which are owned by root, unless it has been granted read permission.

So, before you start wondering about your AUTOBACKUP configuration or search criteria specification like "RESTORE CONTROLFILE FROM AUTOBACKUP MAXDAYS 30", check whether the backup pieces are readable.


Categories: DBA Blogs

Celery Distributed Task Queue with FastAPI for Machine Learning

Andrejus Baranovski - Wed, 2021-05-05 06:43
This sample app demonstrates how to implement Celery distributed task queues on top of a RabbitMQ broker for Machine Learning model training and data processing. We are using TensorFlow in this example to train the model. API requests come through FastAPI and are processed asynchronously by Celery. There is a separate API endpoint to check task status. Multiple requests can be initiated and processed at the same time in parallel. Celery tasks can be monitored using the Flower monitoring tool.

 

Avoiding overlap values...

Tom Kyte - Tue, 2021-05-04 12:26
Hi Mr Tom, you said you have a 'trick' for the following problem. It would be nice if you could tell me about it. Thanks. A form (5.0) with tabular style is displayed like below:

to    from    discount
--    ----    --------
10    40      1.5
50    65      2.5
70    90      1.2
. . . .
60    99      ----> should not be allowed
65    80      ----> should not be allowed

Is there a way to stop OVERLAPPING ranges like the ones shown above with arrow marks? How can I do it? Thanks once again, rgs priya
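Not from the original thread; a minimal SQL sketch (with hypothetical table and column names: discount_ranges, range_from, range_to) of a check that would reject an incoming range when it overlaps an existing row:

select count(*)
from   discount_ranges d
where  d.range_from <= :new_to
and    d.range_to   >= :new_from;
-- a count greater than zero means the new range (:new_from, :new_to) overlaps an existing one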
Categories: DBA Blogs

Fluctuating counts on ROWID splits using DIY parallelism.

Tom Kyte - Mon, 2021-05-03 18:26
Hi Tom, Chris, Connor and all, I've been a user of your DIY Parallel solution for many years now as a way to pull data out of large unpartitioned tables in parallel, to send to other non-Oracle databases or for file system archiving. I've run into a situation recently at my last company, and now at my new company, where the solution is acting differently. I first noticed the change at my old company when the data warehouse I was supporting was moved to an Exadata system in OCI. The version of the database stayed the same, 11.2.0.4, making the only change the hardware/datacenter move. What happened was that during a row count validation of a table export based upon rowid splits, the counts didn't match what was exported. Upon further investigation I found that the row count for a given rowid split was fluctuating. One count would return a value and then the value would change upon subsequent counts. The count didn't just go up, it would go up and down between a set of three or four different values, making getting an accurate count impossible. The SQL I used to do this was of the form: SELECT COUNT(*) FROM X.XXX WHERE ROWID BETWEEN 'AAC2GBAFRAAD+c4AAP' AND 'AAC2GBAGIAAJ97wABX'; or SELECT COUNT(*) FROM X.XXX WHERE ROWID > 'AAC2GBAFRAAD+c4AAP' AND ROWID <= 'AAC2GBAGIAAJ97wABX'; I can see where the counts could increment up if data is being added to the table, but these were static tables and the count bounced back and forth between a few different sets of numbers. I'm now seeing this happen on other databases at my new job and I'm not sure what the cause of it is. I can't pin it down to a type of table or version, or whether it's Exadata related or maybe something related to background work ASM is doing. I did a search to see if anyone else is having this occur to them, without any luck. I can see where lots of folks have implemented it, but not where the row counts for a given split fluctuate. Do you have any idea what could be causing this and how to make it stop? It doesn't happen on all rowid splits for a table and it doesn't happen for all tables in a given database; it appears to be very random. Thanks, Russ
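Not the DIY script from the question, and not an answer to it, but a related sketch: DBMS_PARALLEL_EXECUTE can generate the ROWID ranges instead of a hand-built split (the task name, owner and table name below are placeholders):

begin
  dbms_parallel_execute.create_task(task_name => 'EXPORT_SPLITS');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'EXPORT_SPLITS',
    table_owner => 'X',
    table_name  => 'XXX',
    by_row      => true,
    chunk_size  => 100000);
end;
/

select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'EXPORT_SPLITS'
order  by chunk_id;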
Categories: DBA Blogs

Is it possible to switch a service between 2 pluggable databases?

Yann Neuhaus - Sun, 2021-05-02 11:33
Introduction

In an Oracle database, services have multiple purposes:

  • identifying groups of users prior to authenticating them
  • enabling/disabling access to a database for particular groups
  • managing preferred nodes in a RAC configuration
  • redirecting users to the primary after a switchover/failover when using Data Guard or Dbvisit Standby
  • redirecting read-only connections to the standby database with the Active Data Guard option

A customer asked me if services could be used for switching between PDBs without modifying the client's connection, for a very specific usage. Let’s try to find out.

Initializing the test environment

Let’s create 2 PDBs. The goal is to share a service that can alternatively run on the first PDB or on the second one:

create pluggable database JDU1 admin user JDU identified by JDU2021 ROLES=(DBA) DEFAULT TABLESPACE data
DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 8G TEMPFILE REUSE;

create pluggable database JDU2 admin user JDU identified by JDU2021 ROLES=(DBA) DEFAULT TABLESPACE data
DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 8G TEMPFILE REUSE;


alter pluggable database jdu1 open;

alter pluggable database jdu2 open;

Let’s check current running services on these 2 PDBs:

alter session set container=JDU1;

select name from v$active_services;
NAME
----------------------------------------------------------------
jdu1


alter session set container=JDU2;

select name from v$active_services;

NAME
----------------------------------------------------------------
jdu2
Create a “shared” service and test it

Let’s go to the first container and create the “shared” service:

alter session set container=JDU1;

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

exec dbms_service.start_service (service_name => 'JDU_SVC');

select name from v$active_services;

NAME
----------------------------------------------------------------
jdu1
JDU_SVC

Now, let’s also try to create the service inside the other PDB:

alter session set container=JDU2;

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

*
ERROR at line 1:
ORA-44303: service name exists
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 21
ORA-06512: at "SYS.DBMS_SERVICE", line 316
ORA-06512: at line 1

It does not work because the service seems to be global. Let’s try to start it:

exec dbms_service.start_service (service_name => 'JDU_SVC');


*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE", line 76
ORA-06512: at "SYS.DBMS_SERVICE", line 483
ORA-06512: at line 1

It does not work because it’s already started on the first PDB. Let’s stop it on the first PDB:

alter session set container=JDU1;

exec dbms_service.stop_service (service_name => 'JDU_SVC');

And start it on the second PDB:

alter session set container=JDU2;

exec dbms_service.start_service (service_name => 'JDU_SVC');

*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE", line 76
ORA-06512: at "SYS.DBMS_SERVICE", line 483
ORA-06512: at line 1

No, it doesn’t work.

Let’s now remove the service from the first PDB and create and start it on the second PDB:

alter session set container=JDU1;

exec dbms_service.delete_service (service_name => 'JDU_SVC');

PL/SQL procedure successfully completed.

alter session set container=JDU2;

Session altered.

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

exec dbms_service.start_service (service_name => 'JDU_SVC');

select name from v$active_services;

NAME
----------------------------------------------------------------
JDU_SVC
jdu2
Findings

A service is global and cannot exist in more than one PDB. That’s quite obvious, but I thought it was possible to declare it in multiple PDBs and start it in only one PDB.

Note that at the CDB level, querying CDB_SERVICES gives you an overview of the services and their associated PDBs:

col name for a30
col pdb for a30
select name, pdb from cdb_services;

NAME                           PDB
------------------------------ ------------------------------
SYS$BACKGROUND                 CDB$ROOT
SYS$USERS                      CDB$ROOT
CDBSYR01.test.ch               CDB$ROOT
CDBSYR01XDB                    CDB$ROOT
CDBSYR01_SITE1.test.ch         CDB$ROOT
CSYR01AXDB                     CDB$ROOT
CSYR01A.test.ch                CDB$ROOT
CFRD01AXDB                     CDB$ROOT
CFRD01A.test.ch                CDB$ROOT
JDU1                           JDU1
JDU2                           JDU2
JDU_SVC                        JDU2

Last question I’ve been asking: is it possible to stop the service from the CDB? No, you’ll have to switch to the correct container to do that.

conn / as sysdba

exec dbms_service.stop_service (service_name => 'JDU_SVC');

*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 91
ORA-06512: at "SYS.DBMS_SERVICE", line 519
ORA-06512: at line 1
Conclusion

Services are not manageable through the CDB, despite being a global mechanism. If you want to use services to switch from one PDB to another, you’ll have to script it, because it’s not a built-in feature.
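As a rough idea, here is a hedged SQL*Plus sketch of such a script (run as SYSDBA), built only from the DBMS_SERVICE calls demonstrated above: the service has to be stopped and deleted in the first PDB before it can be created and started in the second one.

alter session set container=JDU1;
exec dbms_service.stop_service (service_name => 'JDU_SVC');
exec dbms_service.delete_service (service_name => 'JDU_SVC');

alter session set container=JDU2;
exec dbms_service.create_service (service_name => 'JDU_SVC', network_name => 'JDU_SVC');
exec dbms_service.start_service (service_name => 'JDU_SVC');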

But would you really use this kind of script? I’m not sure it would be a good idea. If your script failed for some reason, your application would connect to a database that may not be the expected one, without being aware of it. Quite dangerous in my opinion.

The article "Is it possible to switch a service between 2 pluggable databases?" first appeared on the dbi services blog.

My Posts on RMAN

Hemant K Chitale - Sat, 2021-05-01 23:05

 My series of posts on RMAN :

1. 1 : Backup Job Details

2. 2 : ArchiveLog Deletion Policy

3. 3 : The DB_UNIQUE_NAME in Backups to the FRA

4. 4 : Recovering from an Incomplete Restore

5. 4b : Recovering from an Incomplete Restore with OMF Files

6. 5 : Useful KEYWORDs and SubClauses

7. 5b : (More) Useful KEYWORDs and SubClauses

8. 5c : (Some More) Useful KEYWORDs and SubClauses

9. 6 : RETENTION POLICY and CONTROL_FILE_RECORD_KEEP_TIME

10. 7 : Recovery Through RESETLOGS -- how are the ArchiveLogs identified ?

11. 8 : Using a Recovery Catalog Schema

12. 9 : Querying the RMAN Views / Catalog

13. 10 : VALIDATE


An older series of "tips" :

14. Tips -- 1

15. Tips -- 2

16. Tips -- 3

17. Tips -- 4


Other RMAN posts not in the above series (not in any particular order):

18. RMAN's CATALOG command

19. RESTORE and RECOVER a NOARCHIVELOG Database, with Incremental Backups

20. RESTORE and RECOVER a NOARCHIVELOG Database, with Incremental Backups -- 2nd Post

21. Primary and Standby in the same RMAN Catalog

22. Understanding Obsolescence of RMAN Backups

23. "SET TIME ON" in RMAN

24. RMAN Backup of a Standby Database

25. RMAN Image Copy File Names

26. Verifying an RMAN Backup

27. Verifying an RMAN Backup - Part 2

28. Misinterpreting RESTORE DATABASE VALIDATE

29. RMAN Backup and Recovery for Loss of ALL Files

30. CONTROLFILE AUTOBACKUPs are OBSOLETE[d]

31. RMAN Consistent ("COLD" ?) Backup and Restore

32. Archive Log Deletion Policy with a Standby Database

33. Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER

34. Read Only Tablespaces and BACKUP OPTIMIZATION


Categories: DBA Blogs

Pro*C in Oracle

Hemant K Chitale - Sat, 2021-05-01 05:48

Oracle also ships a Pro*C Precompiler that can convert a Pro*C source file to a C source file, which can then be compiled using a C compiler (e.g. "gcc"). Of course, you need the Pro*C Developer licence to use this product.

Here is a quick demo with the command line display and then the actual code below.



oracle19c>ls -ltr
total 12
-rw-r--r--. 1 oracle oinstall 2255 May 1 18:07 instancedbinfo.pc
-rwxr--r--. 1 oracle oinstall 786 May 1 18:14 Compile_my_ProC.SH
-rwxr--r--. 1 oracle oinstall 356 May 1 18:15 Run_my_ProC.SH
oracle19c>./Compile_my_ProC.SH
*****Set LD_LIBRARY_PATH
*****Set C_INCLUDE_PATH
*****PreCompile Pro*C program file

Pro*C/C++: Release 19.0.0.0.0 - Production on Sat May 1 18:15:17 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

System default option values taken from: /opt/oracle/product/19c/dbhome_1/precomp/admin/pcscfg.cfg

*****Compile using C Compiler and specifying Oracle Client library file libclntsh.so
*****Compiled files:
-rw-r--r--. 1 oracle oinstall 2255 May 1 18:07 instancedbinfo.pc
-rw-r--r--. 1 oracle oinstall 0 May 1 18:15 instancedbinfo.lis
-rw-r--r--. 1 oracle oinstall 11875 May 1 18:15 instancedbinfo.c
-rwxr-xr-x. 1 oracle oinstall 14424 May 1 18:15 instancedbinfo
oracle19c>
oracle19c>
oracle19c>
oracle19c>./Run_my_ProC.SH
*****Set LD_LIBRARY_PATH
*****Set Connection String
*****Execute the program
Connected to ORACLE
At ORCLCDB which is on oracle-19c-vagrant running 19.0.0.0.0 and is OPEN, started at 01-MAY-21 17:54:52
This is ORCLPDB1 database running in READ WRITE mode since 01-MAY-21 05.55.21.573 PM +08:00

oracle19c>


The file "instancedbinfo.pc" is my Pro*C source code.
I precompile it using the "proc" precompiler into "instancedbinfo.c". Any compilation errors would have been logged into "instancedbinfo.lis".
Then, the same script "Compile_my_ProC.SH" compiles the C program source code into an executable "instancedbinfo" using "gcc".

Finally, I use "Run_my_ProC.SH" to execute the file "instancedbinfo" (which is now an executable); the execution displays information about the pluggable database it is connected to.


Here is the code for the two shell scripts :


oracle19c>cat Compile_my_ProC.SH

echo "*****Set LD_LIBRARY_PATH"
LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib
export LD_LIBRARY_PATH


echo "*****Set C_INCLUDE_PATH"
C_INCLUDE_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib:/opt/oracle/product/19c/dbhome_1/precomp/public
export C_INCLUDE_PATH

echo "*****PreCompile Pro*C program file"
proc instancedbinfo.pc

echo "*****Compile using C Compiler and specifying Oracle Client library file libclntsh.so"
gcc instancedbinfo.c -o instancedbinfo -L /opt/oracle/product/19c/dbhome_1/lib -l clntsh

echo "*****Compiled files:"
ls -ltr instancedbinfo*
oracle19c>


oracle19c>cat Run_my_ProC.SH

echo "*****Set LD_LIBRARY_PATH"
LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include:/usr/include/linux:/opt/oracle/product/19c/dbhome_1/precom/lib:/opt/oracle/product/19c/dbhome_1/lib
export LD_LIBRARY_PATH

echo "*****Set Connection String"
CNCTSTRING=hemant/hemant@orclpdb1
export CNCTSTRING

echo "*****Execute the program"
./instancedbinfo
oracle19c>


The Compilation script specifies the LD_LIBRARY_PATH and the paths to the include (.h header) files.
It then executes "proc" (which is in $ORACLE_HOME/bin) to precompile the "instancedbinfo.pc" source file.
Finally, it calls "gcc" to compile the C-language source code file (generated by the Precompiler), also specifying the client shared library file libclntsh.so in $ORACLE_HOME/lib (only "-l clntsh" is sufficient to identify the file name). The compiled executable is called "instancedbinfo" and has execute permission.

The Run script specifies the Connect-String that the executable will read from the environment, and then executes it.


Here is the code of the source Pro*C file :


oracle19c>cat instancedbinfo.pc

/* standard C includes */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>



/* Oracle Pro*C includes from $ORACLE_HOME/precomp/public */
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>




/* my variables */
varchar MYinstanceName[16];
varchar MYhostName[64];
varchar MYversion[17];
varchar MYstatus[12];
varchar MYinstanceStartupTime[18];
varchar MYdbName[128];
varchar MYdbOpenMode[10];
varchar MYdbOpenTime[32];



/* function for error handling */
void sql_error(msg)
char msg[200];
{
char err_msg[128];
size_t buf_len, msg_len;

EXEC SQL WHENEVER SQLERROR CONTINUE;

printf("\n%s\n", msg);
buf_len = sizeof (err_msg);
sqlglm(err_msg, &buf_len, &msg_len);
printf("%.*s\n", msg_len, err_msg);

EXEC SQL ROLLBACK RELEASE;
exit(EXIT_FAILURE);
}


/* MAIN program */
int main(argc,argv)
int argc;
char *argv[];
{

/* read Connection String from environment -- or, it could have been hardcoded here */
const char *conn = getenv("CNCTSTRING");
if (!conn) {
printf("! require CNCTSTRING env variable\n");
return (1);
}

EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error--\n");

/* connect to target database */
EXEC SQL CONNECT :conn ;
printf("Connected to ORACLE \n");


/* execute query and populate variables */
/* NOTE : This expects to connect to a PDB ! */
/* If the target is a Non-PDB, change references from v$pdbs to V$database */
EXEC SQL SELECT instance_name,host_name, version,
to_char(startup_time,'DD-MON-RR HH24:MI:SS'), status,
name, open_mode, to_char(open_time)
INTO :MYinstanceName, :MYhostName, :MYversion,
:MYinstanceStartupTime, :MYstatus,
:MYdbName, :MYdbOpenMode, :MYdbOpenTime
FROM v$instance, v$pdbs ;


/* display query results */
printf("At %s which is on %s running %s and is %s, started at %s \n",
MYinstanceName.arr, MYhostName.arr, MYversion.arr, MYstatus.arr, MYinstanceStartupTime.arr);
printf("This is %s database running in %s mode since %s \n",
MYdbName.arr, MYdbOpenMode.arr, MYdbOpenTime.arr);
printf("\n");

/* end of MAIN */
}
oracle19c>


Pro*C allows embedding of SQL calls into a C program by including the Pro*C header files and then running the source code through the Precompiler.
My Pro*C source code file is 2,255 bytes and the generated C source code is 11,875 bytes.

Note that the variables defined as varchar in my Pro*C source file actually become C structures:

/* my variables */
/* varchar MYinstanceName[16]; */
struct { unsigned short len; unsigned char arr[16]; } MYinstanceName;

/* varchar MYhostName[64]; */
struct { unsigned short len; unsigned char arr[64]; } MYhostName;

/* varchar MYversion[17]; */
struct { unsigned short len; unsigned char arr[17]; } MYversion;

/* varchar MYstatus[12]; */
struct { unsigned short len; unsigned char arr[12]; } MYstatus;

/* varchar MYinstanceStartupTime[18]; */
struct { unsigned short len; unsigned char arr[18]; } MYinstanceStartupTime;

/* varchar MYdbName[128]; */
struct { unsigned short len; unsigned char arr[128]; } MYdbName;

/* varchar MYdbOpenMode[10]; */
struct { unsigned short len; unsigned char arr[10]; } MYdbOpenMode;

/* varchar MYdbOpenTime[32]; */
struct { unsigned short len; unsigned char arr[32]; } MYdbOpenTime;


Similarly, my EXEC SQL query also gets re-written :
{
struct sqlexd sqlstm;
sqlstm.sqlvsn = 13;
sqlstm.arrsiz = 8;
sqlstm.sqladtp = &sqladt;
sqlstm.sqltdsp = &sqltds;
sqlstm.stmt = "select instance_name ,host_name ,version ,to_char(startup\
_time,'DD-MON-RR HH24:MI:SS') ,status ,name ,open_mode ,to_char(open_time) int\
o :b0,:b1,:b2,:b3,:b4,:b5,:b6,:b7 from v$instance ,v$pdbs ";
sqlstm.iters = (unsigned int )1;
sqlstm.offset = (unsigned int )51;
sqlstm.selerr = (unsigned short)1;
sqlstm.sqlpfmem = (unsigned int )0;
sqlstm.cud = sqlcud0;
sqlstm.sqlest = (unsigned char *)&sqlca;
sqlstm.sqlety = (unsigned short)4352;
sqlstm.occurs = (unsigned int )0;
sqlstm.sqhstv[0] = (unsigned char *)&MYinstanceName;
sqlstm.sqhstl[0] = (unsigned long )18;
sqlstm.sqhsts[0] = ( int )0;
sqlstm.sqindv[0] = ( short *)0;
sqlstm.sqinds[0] = ( int )0;
sqlstm.sqharm[0] = (unsigned long )0;
sqlstm.sqadto[0] = (unsigned short )0;
sqlstm.sqtdso[0] = (unsigned short )0;
sqlstm.sqhstv[1] = (unsigned char *)&MYhostName;
and so on .....


Pro*C is a very good way of combining C programming with SQL and creating an executable binary instead of an interpreted file (like a Java or Python program outside the database).



Categories: DBA Blogs

Extract DB User Password

Michael Dinh - Fri, 2021-04-30 19:40

For some reason, I had a mental block when trying to extract passwords for database users.

Here are some options.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL>

-- From notes:
SQL> set echo off head off verify off feedb off pages 0 long 10000 longchunk 10000 trimspool on lines 2000 timing off term off
SQL> exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
SQL> exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'PRETTY',TRUE);

SQL> r
  1  select 'alter user '||username||' identified by values '||REGEXP_SUBSTR(DBMS_METADATA.get_ddl ('USER',USERNAME), '''[^'']+''')||';' ddl
  2  from dba_users where username='CTXSYS'
  3*
alter user CTXSYS identified by values 'S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A4B2E98D77B9504BC2EBB457B63600127E34D';

SQL>


-- From colleague
SQL> select name, spare4 from sys.user$ where name='CTXSYS';
CTXSYS
S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A34B2E98D77B9504BC2EBB457B63600127E34D

SQL>

-- From asktom
SQL> r
  1  with t as
  2  (select TO_CHAR(dbms_metadata.get_ddl('USER','CTXSYS')) ddl from dual )
  3  select replace(substr(ddl,1,instr(ddl,'DEFAULT')-1),'CREATE','ALTER')||';' sql
  4* from t

ALTER USER "CTXSYS" IDENTIFIED BY VALUES 'S:29174843DD6989AA921A152BF37264659F042C2D8C216D97F6176AF13E4F;T:7B311EEA53E028F937CFABC4D6F6142E3027195E099798CD9E67910ABE6C621E9C23780121F208451B95AB558A4862F206A917C93B50D96E8F573FE7E1A34B2E98D77B9504BC2EBB457B63600127E34D';

SQL>

Regarding On Update Cascade

Tom Kyte - Fri, 2021-04-30 17:06
Dear Tom, We know that when we delete a parent record, the child record will also be deleted automatically if we used "on delete cascade". Is it possible to automatically update a child record when we update the parent record? (Do we have an "On Update Cascade" option, or anything like it?)
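For reference, a minimal sketch (with made-up table names) of the declarative ON DELETE CASCADE the question refers to. Oracle has no declarative ON UPDATE CASCADE, so cascading a parent key update is normally handled in application code or with a trigger/deferred-constraint workaround.

create table parent_t (id number primary key);

create table child_t (
  id        number primary key,
  parent_id number references parent_t (id) on delete cascade
);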
Categories: DBA Blogs

How to send a sql query result as an attachment in Oracle apex_mail.send procedure.

Tom Kyte - Fri, 2021-04-30 17:06
I have an Oracle SQL query which needs to be run, and I need to send the data returned by the query as an attachment to a mail. Could you please guide me on how to do it using the apex_mail.send procedure? I am calling apex_mail from the database. I have already configured APEX mail and I can call apex_mail.send to send the mail, but I am not sure how to attach the result returned by my Oracle SQL query using apex_mail.add_attachment.
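Not from the original answer; a hedged PL/SQL sketch, assuming apex_mail is already configured: build the query result as a small CSV in a BLOB, then attach it with apex_mail.add_attachment before pushing the queue. The emp table, its columns and the mail addresses are placeholders.

declare
  l_id   number;
  l_csv  blob;
  l_raw  raw(32767);
  procedure append_line(p_line in varchar2) is
  begin
    l_raw := utl_raw.cast_to_raw(p_line || chr(10));
    dbms_lob.writeappend(l_csv, utl_raw.length(l_raw), l_raw);
  end;
begin
  dbms_lob.createtemporary(l_csv, true);
  append_line('ENAME,SAL');                      -- CSV header
  for r in (select ename, sal from emp) loop     -- emp is an assumed table
    append_line(r.ename || ',' || r.sal);
  end loop;

  l_id := apex_mail.send(
            p_to   => 'someone@example.com',
            p_from => 'apex@example.com',
            p_subj => 'Query result',
            p_body => 'The query result is attached as a CSV file.');

  apex_mail.add_attachment(
    p_mail_id    => l_id,
    p_attachment => l_csv,
    p_filename   => 'result.csv',
    p_mime_type  => 'text/csv');

  apex_mail.push_queue;
end;
/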
Categories: DBA Blogs

An example of ORA-01152: file … was not restored from a sufficiently old backup

Yann Neuhaus - Fri, 2021-04-30 10:00
By Franck Pachot



Oracle Error: 
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below 
ORA-01152: file ... was not restored from a sufficiently old backup 
ORA-01110: data file ...

This error is one of the most misunderstood recovery errors. It can happen in different cases, but here is a simple example to reproduce it, with some comments and queries to see the state. I run an example to avoid long theory, but let’s first put the main media recovery concepts in a few words. [Note that even if my idea was to put only a few words… it got longer than expected, so skip to the example if that’s what you were expecting.]

Except when you are doing direct-path inserts, the database files (datafiles) are written from the buffer cache, by the database writer, asynchronously, without caring about transactions. At any point in time, in a database that is open, you may have committed changes for which the changed block is not yet written to the datafiles. You may also have changes written to the datafiles which are not committed yet or will be rolled back later. You may also have only part of an atomic change. There are points (checkpoints) where the database ensures that what has been changed in memory is written back to the datafiles. Those points are important as they mark the start of the redo log stream that is required to ensure that the changes done in memory can be applied (recovered) to the datafiles in case of a database crash. The controlfile keeps track of this, but the checkpoint information (time, SCN, redo threads,…) is also written to the datafile headers so that recovery is possible even if the current controlfile is lost, or restored from a backup.

This covers instance failure. But the stream of redo may also be used for media recovery, where you had to restore a datafile from a previous backup in order to bring the datafile to the current state. Then the datafile header shows a checkpoint from a previous point in time (before the last checkpoint of the current state) and recovery is possible because the redo from before the last checkpoint has also been backed up as archived logs. And finally, there’s also the possibility that all datafiles are from past checkpoints, because you want to do Point-In-Time Recovery (PITR), either by restoring all datafiles from a previous backup and recovering up to this PIT only, or by using flashback database to bring them back, which recovers to this PIT as well. In those cases, you will open a new incarnation of the database, like a branch in the timeline, and OPEN RESETLOGS to explicitly tell Oracle that you know that your datafile state is different from the current state, as known in the controlfile and the online redo logs (which will then be discarded).

However, even if you have this possibility, with a datafile PIT state that does not match the controlfile one, there are some conditions that must be met in order to open the database without inconsistencies. Basically, the datafiles must be consistent among themselves. Table data are in some datafiles, indexes may be in others, metadata in another one, undo vectors elsewhere, and the same for a foreign key’s parent table… So even if all recovery is correct (no changes lost, thanks to redo log recovery), the database may refuse an OPEN RESETLOGS. And that’s basically what ORA-01152 tells you: your recovery is OK for each file, but the point you recovered to is not globally the same consistent state.

So, there are two major pieces of information that are checked by Oracle when opening the database. One is about the consistency of each datafile and the other is about the consistency of the database. When the database is open, there may be some changes that are from after the checkpoint recorded in the datafile header, because the database writer continuously does its job. This is known as the datafile being “fuzzy”. Only when the database is closed is the fuzzy bit cleared, to say that all blocks of the datafile are consistent with the checkpoint time. That is the per-datafile consistency. And in order to leave a clean state, closing the database also does a last checkpoint so that all datafiles are consistent. Such a database can be opened without the need to apply any redo log, given that you want to get the database at the same point in time as when it was closed. But once closed, you can do things that Oracle doesn’t know about, like restoring the files from a previous backup, even from a hot backup where the files were fuzzy. So, in any case, when Oracle opens the database it checks the datafile headers, as if it were not cleanly closed.


SQL> host mkdir -p /u90/{close,open}/{before,after}

I create directories to hold backups of a datafile. I’ll back up the datafile in the open or mount state (to have fuzzy and non-fuzzy backups), and from two points in time (‘before’ and ‘after’ the state I want to open resetlogs at).


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1692588               1692038            1691878                1692586

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ              CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ _______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0

11 rows selected.

I’ll run those queries each time. They show the checkpoint SCN of my database, from the controlfile, and from my datafile headers. The FUZZY=YES column tells me that the database is open, which means that there are changes in the datafiles that were written after the checkpoint. This is also visible through the flag 4 in FHSTA (or 8196, because 8192 is another flag for the SYSTEM datafiles). There are files that are not fuzzy even though the database is open, because their tablespaces are read-only, PDB$SEED in this example. You can see that their checkpoint time is from a long time ago because they haven’t been opened read-write since then. As they are not fuzzy, and checkpointed at the same SCN, they are consistent. And as they have been read-only since then, Oracle knows that they don’t need any recovery. I think we have a clue about this with the RECOVER column being null.
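As a side note, here is a hedged helper query, based only on the flag values mentioned above (4 = fuzzy, 8192 = SYSTEM datafile; other bits are not decoded here), to make the FHSTA column easier to read:

select hxfil file#,
       fhsta,
       case when bitand(fhsta, 4)    = 4    then 'FUZZY'  end fuzzy_flag,
       case when bitand(fhsta, 8192) = 8192 then 'SYSTEM' end system_flag
from   x$kcvfhall;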


SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/before

SQL> alter database end backup;

Database altered.

I’ve taken a hot backup of this datafile. The backup mode ensures that recovery will be possible, but the file is still online, and fuzzy, with the database writer writing to it. So the header still shows it as fuzzy and with the last checkpoint SCN.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

I’ve closed my database cleanly and started it in mount, which means not opened.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1692886            1692000                1692860

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
          natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

After a clean shutdown, no files are fuzzy and all files were checkpointed at the same time: SCN 1692886, which we see in v$datafile and v$datafile_header. All consistent. You can see that the PDB datafiles have a slightly earlier SCN, but this is because the PDBs are closed before the database is, exactly the same as the read-only PDB$SEED. The checkpoint is consistent within the container, just earlier than the database’s, and the RECOVER column is null.


SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/before

I’ve taken another backup of my datafile here, now in a non fuzzy state (like a cold backup)


SQL> create restore point now guarantee flashback database;
Restore point NOW created.

I’m taking a snapshot of my database here as I’ll come back to this point. This PIT, which I call ‘now’, is where I’ll try to restore the datafile from backups taken before (what I just did) or after (what I’m going to do before reverting back to this snapshot).


SQL> alter database open;
Database altered.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1693832               1692889            1692000                1692999

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
               natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00        8196 1692889             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

SQL> alter system checkpoint;

System CHECKPOINT altered.

SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/after

SQL> alter database end backup;

Same as I did before, a hot backup of my datafile, but from a later point in time.


Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1693891


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43        8192 1694252             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/after

And finally a cold backup from a later point in time.


SQL> host ls -l /u90/{close,open}/{before,after}
/u90/close/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:07 users01CDB19.dbf

/u90/close/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:00 users01CDB19.dbf

/u90/open/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:05 users01CDB19.dbf

/u90/open/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 10:55 users01CDB19.dbf

I have 4 backups, from before and after, and in a clean or fuzzy state.


SQL> flashback database to restore point now;

Flashback succeeded.

Now back to my snapshot so that my current state is after the ‘before’ backup and before the ‘after’ backup. Sorry for this bad description of it, time travel is never easy to explain.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1692886


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
11 rows selected.

So, here we are, in the closed (mount) state. No files are open, no files are fuzzy. The checkpoint time is consistent; we will check only CDB$ROOT now, as we know the other containers were checkpointed earlier when they were closed. The CDB$ROOT checkpoint is at 10:56:41, SCN 1692886, which matches the controlfile SCN. I could OPEN RESETLOGS this database without any recovery, but that’s not what I want to show.


SQL> host cp /u90/open/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I restored the datafile from the previous hot backup (older than my current state, and fuzzy)


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  YES    2021-04-30 10:53:23           1 1692602             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0

11 rows selected.

The file header shows the fuzzy state (FUZZY=YES), which means that Oracle needs to apply redo, starting from the checkpoint SCN 1692602, until it brings the file to a consistent state: at least past the END BACKUP, to the end of fuzziness.


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01195: online backup of file 7 needs more recovery to be consistent
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01195. 00000 -  "online backup of file %s needs more recovery to be consistent"
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the file consistent. The
           reported file is an online backup which must be recovered to the
           time the backup ended.
*Action:   Either apply more logs until the file is consistent or
           restore the database files from an older backup and repeat recovery.
SQL>

I’ll not do this recovery; I’m just showing the error message. This is ORA-01195, which tells you that you need more recovery to clear the fuzziness.


SQL> host cp /u90/close/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored from the cold backup here. So no fuzzy flag in the header.


SQL> alter database open resetlogs;

Database altered.

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42        8196 1692890               1 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45        8192 1693089               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1694039               1692890                  0                1693104

From this cold backup, I was able to OPEN RESETLOGS: because the backup was taken while the database was closed, all the files are consistent.

This was to show the meaning of ORA-01195: a datafile needs to be recovered until it is consistent within itself.

Now we will look at consistency with the other datafiles, by restoring a backup from the future.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.
SQL> flashback database to restore point now;

Flashback succeeded.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

As my OPEN RESETLOGS was successful, I flash back the database again to return to the same point in time for my experiment.
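The restore point named NOW that I flash back to was created earlier in the setup (not shown in this excerpt). A minimal sketch of how such a restore point could be created, under the assumption that it is a guaranteed restore point:

-- Assumption: a guaranteed restore point created before the experiment
create restore point now guarantee flashback database;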


SQL> host cp /u90/close/after/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored the cold backup (not fuzzy) but from a checkpoint that happened after my current state.


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

You can see that the datafile is not fuzzy but has a checkpoint at 11:05:43, SCN 1694252, whereas all the other datafiles for this container, and the controlfile, are at 10:56:41, SCN 1692886. My file comes from a state in the future of the others.
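A quick way to spot such a file, given here as a hedged sketch: compare the checkpoint SCN that the controlfile records for each datafile (V$DATAFILE) with the SCN stored in the file header itself (V$DATAFILE_HEADER).

-- Sketch: a header SCN ahead of the controlfile SCN means the file was restored from the future of the others
select f.file#,
       f.checkpoint_change# as controlfile_scn,
       h.checkpoint_change# as header_scn,
       case when h.checkpoint_change# > f.checkpoint_change#
            then 'ahead of the controlfile' end as note
from   v$datafile f
join   v$datafile_header h on h.file# = f.file#;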


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01152: file 7 was not restored from a sufficiently old backup
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01152. 00000 -  "file %s was not restored from a sufficiently old backup "
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the database consistent. This
           file is still in the future of the last log applied. The most
           likely cause of this error is forgetting to restore the file
           from a backup before doing incomplete recovery.
*Action:   Either apply more logs until the database is consistent or
           restore the database file from an older backup and repeat recovery.

Here is ORA-01152, and the message may be misleading because there can be several causes. Maybe the problem really is the file mentioned, because you restored it from a backup that is too recent compared to the others and to the point in time you want to OPEN RESETLOGS. Or maybe it was not restored at all, and what remains there is the current datafile, because you forgot to restore that one file. Or maybe you actually want to go to a later point in time, by recovering the other datafiles up to the same PIT as this one.

I'll not go further here, as this blog post is already long enough. Of course, I would get the same error if I restored the fuzzy backup from the future. When you encounter this error, think about the point in time you want to recover to: either you have the right PIT, and then you need to restore a backup of this datafile taken before that point in time, or you actually want to recover to a later point in time to reach the state of this datafile. The error message assumes you have recovered to the right point in time but did not restore the right file.
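If the point in time represented by that file is actually the one you want, a hedged sketch of the other direction, recovering the rest of the database forward to roughly the SCN seen in its header, could look like this:

-- Sketch: recover the other datafiles up to (about) the SCN of the file from the future, then open
recover database until change 1694252;
alter database open resetlogs;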

I tried to summarize this situation in a tweet:

ORA-01152 is misleading.
The problem may be with the other files.
When I see:
ORA-01152: file #n was not restored from a sufficiently old backup
I think of #n as Marty McFly telling the other datafiles:
Guess you guys aren't ready for that yet…

(is it clear or should I blog?)

— Franck Pachot (@FranckPachot) April 30, 2021


The post An example of ORA-01152: file … was not restored from a sufficiently old backup appeared first on Blog dbi services.

Timestamp with time zone comparison Issues

Tom Kyte - Thu, 2021-04-29 22:46
Hi, I am facing an issue while validating TIMESTAMP WITH TIME ZONE data against SYSTIMESTAMP in 11g R2. My DB server is in the US/Central zone. I have a table with a TIMESTAMP WITH TIME ZONE column, and I have inserted a future timestamp for the same time zone (US/Central or UTC-5). When selecting from the table, I get the same data back. I also have an anonymous block which verifies whether the timestamp in the table has crossed SYSTIMESTAMP or not.

Before the daylight saving change in March, this process was working correctly: both methods return the correct output when SYSTIMESTAMP is greater than the TIMESTAMP WITH TIME ZONE column. However, after the daylight saving change, a record inserted with the time zone given as US/Central returns the correct output only one hour after the actual time. I have given a sample in LiveSQL which I hope helps explain the issue I am facing. Is there any specific reason for this behavior?

Thanks in advance for your help, Manoj
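To make the described comparison concrete, here is a small hedged sketch (the table and values are hypothetical, not the poster's LiveSQL example). A TIMESTAMP WITH TIME ZONE value and SYSTIMESTAMP are compared as absolute instants, so the region's daylight saving offset is taken into account at comparison time.

-- Hypothetical illustration of the comparison described in the question
create table tz_demo (event_ts timestamp with time zone);

insert into tz_demo values (timestamp '2021-03-15 10:00:00 US/Central');

select event_ts,
       systimestamp,
       case when event_ts <= systimestamp then 'crossed' else 'not yet' end as status
from   tz_demo;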
Categories: DBA Blogs

CTE fails when used with a db link

Tom Kyte - Thu, 2021-04-29 22:46
I am trying to use a CTE in a query that copies data from one database to another. The CTE is used because I am unable to handle a cycle with CONNECT BY. In this simple illustration, the db link used by the insert causes an error (ORA-00942: table or view does not exist).

insert into don.T2@gstest (RELATE_STRING)
with cte (LVL, I, PARENT_I, RELATE) as (
  select 1 as LVL, I, PARENT_I, '+' || RELATE as RELATE
  from don.T1@gsdev
  where PARENT_I is null
  union all
  select c.LVL + 1, t.I, t.PARENT_I, c.RELATE || '+' || t.RELATE
  from cte c
  join T1 t on t.PARENT_I = c.I)
select RELATE from cte order by LVL, I;

The illustration doesn't have a cycle issue, so CONNECT BY can be used to demonstrate. If I ensure that the code is executed from the target database and I remove the db link, the code works:

insert into don.T2 (RELATE_STRING)...

I was unable to figure out how to make a db link in LiveSQL.
Categories: DBA Blogs

Fixed Broken Links

Bobby Durrett's DBA Blog - Wed, 2021-04-28 11:48

I had a couple of new comments on older blog pages and I noticed that some links pointed to things which no longer exist. I fixed a few things that I knew were wrong. Finally, this week I decided to review every post back to the beginning 9 years ago and click on every link.

For dead links I removed the link and added a comment like (DOES NOT EXIST). In some cases, the link still existed but was different. I changed several links to Oracle’s documentation for example. In many cases I put (updated) or something like that to show that there was a new link. I synced up a lot of the old links to the current version of my scripts on GitHub. Hopefully these changes won’t make the older posts unreadable, but at least the links will point to useful and up to date versions of things.

I did not carefully read every post because I was in a hurry, but I did look at every one, and it gave me the chance to see how things changed over the past 9 years. It was a quick review of what I was thinking. Some of the posts seemed kind of dumb (what was I thinking?), but others are genuinely useful. It was also interesting to see which topics I talked about and how that changed over time. It makes me wonder where things will go in the future. I cannot expect to fully predict it, but it is good to think about what I should be learning and which skills I should be developing as things go forward.

Anyway, hopefully the updated links will make the posts a little more helpful to myself and others.

Bobby

Categories: DBA Blogs

editing large clobs

Tom Kyte - Wed, 2021-04-28 10:06
Hi, I've developed an application in APEX which provides a feature to maintain files stored in a CLOB (well, actually it's a BLOB, but I know how to handle this). This works very well as long as the size of the content doesn't exceed the 32k limit. If a file is bigger than this limit, an error 'ORA-06502: PL/SQL: numeric or value error' is raised. Is there any solution I can use to handle files with more than 32k characters in an APEX app?
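One common direction, sketched here with hypothetical names and not as an APEX-specific answer: process the CLOB in chunks that stay below the 32k VARCHAR2 limit, using DBMS_LOB, for example when moving the content into or out of a page item.

-- Minimal sketch, assuming a hypothetical table MY_FILES(ID, FILE_CONTENT CLOB)
declare
  l_clob   clob;
  l_chunk  varchar2(32767);
  l_len    pls_integer;
  l_pos    pls_integer := 1;
  l_amount pls_integer := 8000;
begin
  select file_content into l_clob from my_files where id = 1;
  l_len := dbms_lob.getlength(l_clob);
  while l_pos <= l_len loop
    l_chunk := dbms_lob.substr(l_clob, l_amount, l_pos);
    -- process the chunk here, e.g. append it to an APEX collection member
    l_pos := l_pos + l_amount;
  end loop;
end;
/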
Categories: DBA Blogs

"java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver"

Tom Kyte - Wed, 2021-04-28 10:06
May I ask about "java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver"? Per the Java documentation, PATH and CLASSPATH must be configured in the Windows 10 environment variables. Below is information about the system/database version and what I did.

The Java version:

C:\>java -version
java version "1.8.0_281"
Java(TM) SE Runtime Environment (build 1.8.0_281-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)

JAVA_HOME is C:\Program Files\Java\jre\1.8.0_281

The CLASSPATH has:

C:\Program Files\Java\jdk\1.8.0_281
C:\Program Files\Java\jdk\1.8.0_281\bin
C:\Program Files\Java\jre\1.8.0_281
C:\Program Files\Java\jre\1.8.0_281\bin
C:\Oracle_RDBS_installation\product\12.2.0\client_1\bin
C:\Oracle_RDBS_installation\product\12.2.0\dbhome_1\bin

The compilation of the Java program with "javac" was successful, but the class-not-found error is returned at run time. I have no problem accessing the Oracle database from SQL*Plus at all. I created two users and a simple table for the sake of testing, and everything worked fine. I have no problem with the listener and the TNSNAMES.ORA. Below is the information about the database:

SQL*Plus: Release 12.2.0.1.0 Production on Wed Apr 21 09:08:05 2021
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Enter user-name: c##_chiara
Enter password:
Last Successful login time: Sat Apr 17 2021 09:46:21 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Below is the Java code:

import java.sql.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;

public class Thirdj3 {
    public static void main(String[] args) {
        System.out.println("Beginning of the Thirdj3 program.");
        try {
            // step 1: load the driver class
            Class.forName("oracle.jdbc.driver.OracleDriver");
            // step 2: create the connection object
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@DESKTOP-JBRID90.home:1521:orcl", "c##_chiara", "chiara");
            // step 3: create the statement object
            Statement stmt = con.createStatement();
            // step 4: execute query
            //*ResultSet rs = stmt.executeQuery("select * from emp");
            //*while (rs.next())
            //*    System.out.println(rs.getInt(1) + " " + rs.getString(2) + " " + rs.getString(3));
            // step 5: close the connection object
            con.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    } // End of main
} // End of Thirdj3 class
Categories: DBA Blogs

How can we query one database and get all databases' cell or IO usage from one database?

Tom Kyte - Wed, 2021-04-28 10:06
Hi Ask Tom Team,

We had an issue in all of our production databases on an Exadata cluster: one RAC database's I/O problem caused every database on the cluster to slow down, user transactions took much more time, and batch jobs hung. We need to run some scripts to monitor the cluster consistently, besides OEM, so that we can identify issues quicker and act sooner. OEM provides good information for our needs; we need your help with SQL scripts that generate the same or similar information.

In Oracle AWR reports we can see a section that is specific to Exadata, "Exadata Top Database Consumers". It appears in every AWR report no matter in which database the report is run (global data), and it includes cell usage and I/O of all databases even though the AWR report is run in only one database. Can you please help with the SQL that generates that part of the AWR report? For example, how can we run a query to get "Top Databases by Requests" and so on, by running queries in only one of the databases on the cluster? We checked the dictionary tables/views but failed to find any that provide the cluster-wide information shown in AWR reports.

We need to run the scripts (or a stored procedure) periodically and have the system send us an email if any specific database's I/O requests surpass a threshold we set. We can query live tables (GV$xxxxx) or DBA_HIST_xxxxx. Please advise. Thanks!
Categories: DBA Blogs

Script too slow - how to optimize

Tom Kyte - Wed, 2021-04-28 10:06
Can you advise me on how to optimize this batch script?

CREATE OR REPLACE PACKAGE BODY ADMIN_WAFA.PCK_Trait_Pre IS
--------------------------------------------------------------------------------
FUNCTION Fun_Traiter_Prelevement(P_CONTROL_ANO VARCHAR2) RETURN VARCHAR2 IS
  -- direct-debit cursor
  CURSOR Prov_Cursor IS SELECT * FROM Prov_Prelevement;
  Code_Tier NUMBER;
  v_sql     VARCHAR2(1000);
  --------------- ?????????????????????????????? ---------------
BEGIN
  /*********************** Gather statistics ************************/
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME          => 'ADMIN_WAFA',
                                  TABNAME          => 'PROV_PRELEVEMENT',
                                  DEGREE           => 8,
                                  CASCADE          => TRUE,
                                  METHOD_OPT       => 'FOR COLUMNS SIZE AUTO',
                                  ESTIMATE_PERCENT => 20);
  END;
  /************************ Work around a performance problem ************************/
  -- alter session set optimizer_mode=RULE;
  EXECUTE IMMEDIATE ('alter session set optimizer_mode=RULE');
  -- Start FZ HANOUNOU
  IF (P_CONTROL_ANO = 'O') THEN
    Proc_Test_Anomalie;
  END IF;
  -- End FZ HANOUNOU
  ---------------------------------------------
  -- Modified by Karim EL HALOUI on 15/12/2010
  -- to refresh the materialized view
  ---------------------------------------------
  -- drop
  BEGIN
    EXECUTE IMMEDIATE ('DROP MATERIALIZED VIEW vue_anomalie_ie_aff');
  EXCEPTION WHEN OTHERS THEN NULL;
  END;
  -- create
  v_sql := 'CREATE MATERIALIZED VIEW vue_anomalie_ie_aff AS SELECT a.ano_num_aff,a.ano_ide_reg,a.ano_dat_ech FROM anomalie a where a.ano_pret_a>45';
  execute immediate v_sql;
  ------------------ ?????? -------------------
  v_sql := 'alter FUNCTION F_ECH_ANOM compile';
  execute immediate v_sql;
  v_sql := 'alter package PKG_CTRL_PROMESSE_REG compile body';
  execute immediate v_sql;
  ---------------------------------------------
  v_nom_fonction := 'Proc_Traiter_Prelevement';
  open Prov_cursor;
  LOOP
    FETCH Prov_cursor INTO Prov_record;
    EXIT WHEN Prov_cursor%NOTFOUND OR Prov_cursor%NOTFOUND is NULL;
    -- look up code_societe
    select s.code_societe into v_code_societe
    from societes s
    where s.lib_societe = prov_record.prov_societe;
    IF not Fun_Test_New_aff THEN
      IF Fun_Test_New_tier THEN
        Code_Tier := Fun_Ajout_Tier;
      ELSE
        Code_Tier := Fun_Cle_Tier;
      END IF;
    ELSE
      Code_Tier := Fun_Ajout_Tier;
    END IF;
    Proc_Ajout_Prelevement(Code_Tier);
  END LOOP;
  execute immediate ('truncate table prov_prelevement');
  v_nom_fonction := 'proc_generation_interne';
  pck_ban.proc_generation_interne;
  v_nom_fonction := 'proc_saisie_imp_bqe_interne';
  proc_saisie_imp_bqe_interne;
  v_nom_fonction := 'pck_ban.proc_generation_doti';
  pck_ban.proc_generation_doti;
  update /*+choose*/ prelevement p
  set p.pre_dat_ech = to_char(sysdate,'DD/MM/YYYY')
  where exists (select null from vacation v where v.pre_ide_reg = p.pre_ide_reg)
  and p.pre_ide_reg < 0
  and exists (select null from tiers t
              where p.tier_cle = t.tier_cle
              and t.tier_code_societe = 51
              and t.tier_nom_agence = 'REPRCTX')
  and p.pre_dat_ech > to_char(sysdate,'DD/MM/YYYY'...
Categories: DBA Blogs
