Yann Neuhaus

dbi services technical blog

Easily manage dual backup destination with RMAN

Wed, 2018-08-15 10:40

Backing up on disk with RMAN is great. It’s fast: you can configure as many channels as your platform can handle for faster backups, and you can restore as fast as you can read and write files on disk with these multiple channels. That is, as long as you’re using Enterprise Edition, because Standard Edition is limited to a single channel.
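
With Enterprise Edition you could, for example, configure several disk channels for all subsequent backups, something like this (the degree of parallelism here is arbitrary):

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;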

Disk space is very often limited, and you’ll probably have to find another solution if you want to keep backups longer. You can think about tapes, or you can connect RMAN to a global backup tool, but that requires additional libraries that are not free, and it definitely adds complexity.

The other solution is to have a dual disk destination for the backups. The first one will be the main destination for your daily backups; the other one will be dedicated to long-term backups, maybe on slower disks but with more free space available. This second destination can then be backed up with another tool without using any library.

For the demonstration, assume you have two filesystems: /backup is dedicated to the latest daily backups and /lt_backup is for long-term backups.

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

First of all, take a backup on the first destination:

RMAN> backup as compressed backupset database format '/oracle/backup/%U';

 

This is a small database and the backup is done with the default single channel, so there are only two backupsets: one for the datafiles and the other for the controlfile and the spfile:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:27 backup/2btaj0mt_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:27 backup/2ctaj0nm_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

It’s quite easy to move the backup to the long-term destination with RMAN:

RMAN> backup backupset all format '/oracle/lt_backup/%U' delete input;

 

BACKUP BACKUPSET with DELETE INPUT is basically the same as a system mv or move, but it does not require recataloging the backup files, as RMAN does this automatically.
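
You can check at any time how the backupsets are registered, for example with:

RMAN> list backup summary;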

Now our backup is located in the second destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:28 lt_backup/2btaj0mt_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:28 lt_backup/2ctaj0nm_1_2

 

You can see here that the backup filename has changed: the last number increased. Oracle knows that this is the second copy of these backupsets (even though the first ones don’t exist anymore).
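
If you want to see how a specific piece is registered after the move, you can query it by its handle, for example:

RMAN> list backuppiece '/oracle/lt_backup/2btaj0mt_1_2';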

As with a mv command, you can move your backup back to the previous destination:

RMAN> backup backupset all format '/oracle/backup/%U' delete input;

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

All the backupsets are now back in the first destination only, and you can see another increase in the filename number. And the RMAN catalog is up to date.

Now let’s make the first folder the default destination for the backups, and go for compressed backupsets as the default behavior:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/backup/%U';

 

Now you only need a two-word command to back up the database:

RMAN> backup database;

 

The new backup is in the first destination as expected:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Suppose you want to move the oldest backups, those completed before 1:30 AM:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+1.5/24' format '/oracle/lt_backup/%U' delete input;

 

Everything is working as expected: the latest backup is still in the first destination, and the oldest one is in the lt_backup filesystem, with another increase of the number ending the filename:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:38 lt_backup/2btaj0mt_1_4
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:38 lt_backup/2ctaj0nm_1_4

 

Now that the tests are OK, let’s simulate a real-world example. First, tidy up all the backups:

RMAN> delete noprompt backupset;

 

Let’s take a new backup.

RMAN> backup database;

 

Backup is in default destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Let’s take another backup later:

RMAN> backup database;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Now let’s move the oldest backup to the other folder:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2

 

Storing only the oldest backups in the long-term destination is not so clever: imagine you lose your first backup destination. It would be great to have the latest backup in both destinations. You can do that with BACKUP BACKUPSET COMPLETED AFTER and no DELETE INPUT, which is basically the same as a cp or copy command:

RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

323M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:03 lt_backup/2htaj2m4_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:03 lt_backup/2itaj2mt_1_2

 

That’s it: you now have a first destination for the newest backups, and a second one for all the backups. You just have to schedule these two BACKUP BACKUPSET commands after the daily backup of your database.

Note that backups will stay in both destinations until they reach the retention limit you defined for your database. DELETE OBSOLETE will purge the backupsets wherever they are and delete all the known copies.
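
Putting it all together, a daily routine could look roughly like the following RMAN command file (a sketch reusing the destinations from above; the script name, the one-day boundary and the retention policy have to be adapted to your environment):

# daily_dual_dest.rman (hypothetical name)
backup database;
# move backups completed more than one day ago to the long-term destination only
backup backupset completed before 'SYSDATE-1' format '/oracle/lt_backup/%U' delete input;
# copy the latest backups so that they exist in both destinations
backup backupset completed after 'SYSDATE-1' format '/oracle/lt_backup/%U';
# purge the copies that are beyond the configured retention policy
delete noprompt obsolete;

It can then be scheduled with something like rman target / @daily_dual_dest.rman.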

 

This article Easily manage dual backup destination with RMAN first appeared on Blog dbi services.

Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE

Wed, 2018-08-15 10:08

With Oracle Database 18c, we can now refresh a standby database over the network using one RMAN command, RECOVER STANDBY DATABASE.

The RECOVER STANDBY DATABASE command restarts the standby instance, refreshes the control file from the primary database, and automatically renames data files, temp files, and online logs. It restores new data files that were added to the primary database and recovers the standby database up to the current time.
When you use the RECOVER STANDBY DATABASE command to refresh a standby database, you specify either a FROM SERVICE clause or a NOREDO clause. The FROM SERVICE clause specifies the name of a primary service. The NOREDO clause specifies that backups should be used for the refresh, which allows a standby to be rolled forward to a specific time or SCN.
The MRP must be manually stopped on the standby before any attempt is made to synchronize it with the primary database.
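
For illustration, the two variants look like this in their simplest form (the service name is just an example):

RMAN> RECOVER STANDBY DATABASE FROM SERVICE primary_service;
RMAN> RECOVER STANDBY DATABASE NOREDO;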

In this blog post I am doing some tests of the standby refresh using the RECOVER STANDBY DATABASE command.

Starting from a healthy Data Guard configuration, let’s set the StandbyFileManagement property to MANUAL:

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

DGMGRL>

DGMGRL> edit database 'CONT18C_SITE' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'CONT18C_SITE1' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> show  database 'CONT18C_SITE' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL> show  database 'CONT18C_SITE1' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL>

Then I add a new tablespace and a new table on the primary:

SQL> create tablespace TBS_2 datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf' size 5M ;

Tablespace created.

SQL> create table test (id number) tablespace TBS_2;

Table created.

SQL> insert into test values (1);

1 row created.

SQL> insert into test values (2);

1 row created.

SQL> commit;

Commit complete.

SQL>

As expected, the changes are not being replicated, as shown in the standby alert log and in the broker configuration.

(3):File #14 added to control file as 'UNNAMED00014' because
(3):the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
(3):The file should be manually created to continue.
MRP0 (PID:6307): MRP0: Background Media Recovery terminated with error 1274
2018-08-15T13:31:08.343276+02:00
Errors in file /u01/app/oracle/diag/rdbms/cont18c_site1/CONT18C/trace/CONT18C_mrp0_6307.trc:
ORA-01274: cannot add data file that was originally created as '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf'
MRP0 (PID:6307): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!

Using the broker

DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          4 minutes 33 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    CONT18C

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
ERROR

DGMGRL>

Now let’s try to sync the standby database using the RECOVER command. First let’s stop the recovery process.

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-OFF';
Succeeded.
DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          26 minutes 28 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    CONT18C

Database Status:
SUCCESS

DGMGRL>

Next, let’s connect with RMAN to the standby as the target database and run the command.
Note that if we try to run the command while connected to the primary as the target, we get the following error:

RMAN> RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/15/2018 14:00:15
RMAN-05146: must be connected to standby database to issue RECOVER STANDBY DATABASE

RMAN>

So let’s run it from the standby as the target. Note that the output is truncated.

[oracle@primaserver admin]$ rman target sys/root@cont18c_site1

Recovery Manager: Release 18.0.0.0.0 - Production on Wed Aug 15 14:03:55 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONT18C (DBID=4292751651)

RMAN>  RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
using target database control file instead of recovery catalog
Executing: alter database flashback off
Oracle instance started

Total System Global Area     956299440 bytes

Fixed Size                     8902832 bytes
Variable Size                348127232 bytes
Database Buffers             595591168 bytes
Redo Buffers                   3678208 bytes

contents of Memory Script:
{
   restore standby controlfile from service  'CONT18c_SITE';
   alter database mount standby database;
}
executing Memory Script

Starting restore at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output file name=/u01/app/oracle/oradata/CONT18C/control01.ctl
output file name=/u01/app/oracle/oradata/CONT18C/control02.ctl
Finished restore at 15-AUG-18

released channel: ORA_DISK_1
Statement processed

contents of Memory Script:
{
set newname for datafile  14 to
 "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   restore from service  'CONT18c_SITE' datafile
    14;
   catalog datafilecopy  "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   switch datafile all;
}
executing Memory Script

executing command: SET NEWNAME

Starting restore at 15-AUG-18
Starting implicit crosscheck backup at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Crosschecked 5 objects
Finished implicit crosscheck backup at 15-AUG-18

Starting implicit crosscheck copy at 15-AUG-18
using channel ORA_DISK_1
Finished implicit crosscheck copy at 15-AUG-18

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_47_fq7q5ls5_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_48_fq7qn5s3_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_49_fq7r0715_.arc
File Name: 
…
…

contents of Memory Script:
{
  recover database from service  'CONT18c_SITE';
}
executing Memory Script

Starting recover at 15-AUG-18
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 1550044
skipping datafile 6; already restored to SCN 1550044
skipping datafile 8; already restored to SCN 1550044
skipping datafile 14; already restored to SCN 2112213
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00001: /u01/app/oracle/oradata/CONT18C/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00003: 
…
…
destination for restore of datafile 00012: /u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00013: /u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 15-AUG-18
flashback needs to be reenabled on standby open
Finished recover at 15-AUG-18

RMAN>

And we can verify that the configuration is now in sync:

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-ON';
Succeeded.
DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

After opening the standby in read-only mode, we can verify that everything is now fine.
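
For completeness, opening the standby read-only can look roughly like this (a sketch; without the Active Data Guard option, redo apply should stay off while the database is open):

DGMGRL> edit database 'CONT18C_SITE1' set state='APPLY-OFF';
SQL> alter database open read only;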

SQL> alter session set container=pdb1;

Session altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
TBS_NOLOG
TBS_2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/CONT18C/PDB1/system01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf

6 rows selected.

SQL> select * from test;

        ID
----------
         1
         2

SQL>
 

This article Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE first appeared on Blog dbi services.

Oracle Database on OpenShift

Tue, 2018-08-14 15:39
By Franck Pachot

In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purposes. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift to deploy it from the Image Stream.

I start MiniShift on my laptop, specifying a larger disk (default is 20GB)

C:\Users\Franck>minishift start --disk-size 40g
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 40 GB
-- Starting Minishift VM .................................................................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
Importing 'openshift/origin:v3.9.0' ............. OK
Importing 'openshift/origin-docker-registry:v3.9.0' ... OK
Importing 'openshift/origin-haproxy-router:v3.9.0' ...... OK
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........................................................
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.99.105 as the host IP
Using 192.168.99.105 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
https://192.168.99.105:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

MiniShift starts a VirtualBox VM and gets an IP address from the VirtualBox DHCP – here 192.168.99.105.
I can access the console at https://192.168.99.105:8443 and log in as developer or admin, but for the moment I’m continuing on the command line.

At any moment I can log in to the VM running OpenShift with the minishift command. Here I check the size of the disks:

C:\Users\Franck>minishift ssh
 
[docker@minishift ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/live-rw 9.8G 697M 9.0G 8% /
devtmpfs 974M 0 974M 0% /dev
tmpfs 1000M 0 1000M 0% /dev/shm
tmpfs 1000M 18M 983M 2% /run
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/sr0 344M 344M 0 100% /run/initramfs/live
/dev/sda1 39G 1.8G 37G 5% /mnt/sda1
tmpfs 200M 0 200M 0% /run/user/0
tmpfs 200M 0 200M 0% /run/user/1000

Build the Docker image

The goal is to run in OpenShift a container from an image that has been built somewhere else. In this example I will not build one but use one provided on the Docker Store: the Oracle Database ‘slim’ image. I’ll use the Docker engine of the Minishift VM, just because it is there.

I have DockerTools installed on my laptop and just want to set the environment to connect to the docker server on the minishift VM. I can get the environment from minishift:

C:\Users\Franck>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.105:2376
SET DOCKER_CERT_PATH=C:\Users\Franck\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Here is how to directly set the environment from it:

C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Now my docker commands will connect to this Docker server. Here is the related info; minishift is already running several containers there for its own usage:

C:\Users\Franck>docker info
Containers: 9
Running: 7
Paused: 0
Stopped: 2
Images: 6
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log:
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: e9c345b3f906d5dc5e8100b05ce37073a811c74a (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 3.10.0-862.6.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953GiB
Name: minishift
ID: U7IQ:TE3X:HSGK:3ES2:IO6G:A7VI:3KUU:YMBC:3ZIR:QYUL:EQUL:VFMS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pachot
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false

As I’ll use the Oracle Database image for this example, I need to log in to the Docker Store to prove that I accept the licensing conditions:

C:\Users\Franck>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

I pull the image, which takes some time because ‘slim’ still means 2GB with Oracle Database.

C:\Users\Franck>docker pull store/oracle/database-enterprise:12.2.0.1-slim
Trying to pull repository docker.io/store/oracle/database-enterprise ...
12.2.0.1-slim: Pulling from docker.io/store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for docker.io/store/oracle/database-enterprise:12.2.0.1-slim

Here is the image:

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

My minishift VM disk usage has increased by 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Push the image to OpenShift registry

OpenShift has its integrated container registry from which the Docker images are visible to Image Stream.
Here is the address of the registry:

C:\Users\Franck>minishift openshift registry
172.30.1.1:5000

I’ll run some OpenShift commands and the path to the minishift cache for ‘oc’ can be set with:

C:\Users\Franck>minishift oc-env
SET PATH=C:\Users\Franck\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
 
C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

I am still connected as developer to OpenShift:

C:\Users\Franck>oc whoami
developer

and I get the login token:

C:\Users\Franck>oc whoami -t
lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc

I use this token to log in to the OpenShift registry with docker in order to be able to push the image:

C:\Users\Franck>docker login -u developer -p lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc 172.30.1.1:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I create a new project to import the image to:

C:\Users\Franck>oc new-project oracle --display-name=Oracle
Now using project "oracle" on server "https://192.168.99.105:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 
to build a new example application in Ruby.

This can also be done from the GUI. Here is the project on the right:
CaptureOpenShiftProject

I tag the image with the name of the registry (172.30.1.1:5000) and the name of the project (oracle) and add an image name, so that the full name is: 172.30.1.1:5000/oracle/ora122slim

C:\Users\Franck>docker tag store/oracle/database-enterprise:12.2.0.1-slim 172.30.1.1:5000/oracle/ora122slim

We can see this tagged image

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
172.30.1.1:5000/oracle/ora122slim latest 27c9559d36ec 12 months ago 2.08GB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

Note that it is the same IMAGE ID and doesn’t take more space:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Then I’m finally ready to push the image to the OpenShift Docker registry:

C:\Users\Franck>docker push 172.30.1.1:5000/oracle/ora122slim
The push refers to a repository [172.30.1.1:5000/oracle/ora122slim] 066e811424fb: Pushed
99d7f2451a1a: Pushed
a2c532d8cc36: Pushed
49c80855196a: Pushed
40c24f62a02f: Pushed
latest: digest: sha256:25b0ec7cc3987f86b1e754fc214e7f06761c57bc11910d4be87b0d42ee12d254 size: 1372

This is a copy, and takes an additional 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 5.4G 33G 14% /mnt/sda1

Deploy the image

Finally, I can deploy the image as it is visible in the GUI:
CaptureOpenShiftImport

I choose to deploy from the command line:

C:\Users\Franck>oc new-app --image-stream=ora122slim --name=ora122slimdeployment
--> Found image 27c9559 (12 months old) in image stream "oracle/ora122slim" under tag "latest" for "ora122slim"
 
* This image will be deployed in deployment config "ora122slimdeployment"
* Ports 1521/tcp, 5500/tcp will be load balanced by service "ora122slimdeployment"
* Other containers can access this service through the hostname "ora122slimdeployment"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/ora122slimdeployment --add ...'

--> Creating resources ...
imagestreamtag "ora122slimdeployment:latest" created
deploymentconfig "ora122slimdeployment" created
service "ora122slimdeployment" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/ora122slimdeployment'
Run 'oc status' to view your app.

CaptureOpenShiftDeploy

I expose the service:

C:\Users\Franck>oc expose service ora122slimdeployment
route "ora122slimdeployment" exposed

/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

Here is one little thing to change. From the POD terminal, I can see the following error:
CaptureOpenShiftCrash

The same can be read from command line:

C:\Users\Franck>oc status
In project Oracle (oracle) on server https://192.168.99.105:8443
 
http://ora122slimdeployment-oracle.192.168.99.105.nip.io to pod port 1521-tcp (svc/ora122slimdeployment)
dc/ora122slimdeployment deploys istag/ora122slim:latest
deployment #1 deployed 7 minutes ago - 0/1 pods (warning: 6 restarts)
 
Errors:
* pod/ora122slimdeployment-1-86prl is crash-looping
 
1 error, 2 infos identified, use 'oc status -v' to see details.
 
C:\Users\Franck>oc logs ora122slimdeployment-1-86prl -c ora122slimdeployment
/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

This is because, by default and for security reasons, OpenShift runs the container with a random user ID. But the files are executable only by the oracle user:

sh-4.2$ ls -l /home/oracle/setup/dockerInit.sh
-rwxr-xr--. 1 oracle oinstall 2165 Aug 17 2017 /home/oracle/setup/dockerInit.sh
sh-4.2$

The solution is quite simple: allow the container to run with its own user id:

C:\Users\Franck>minishift addon apply anyuid
-- Applying addon 'anyuid':.
Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
Per default OpenShift runs containers using an arbitrarily assigned user ID.
Refer to https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints and
https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines for more information.
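
As a side note, another common approach (not used here) would be to build a derived image in which the setup scripts are also readable and executable by the root group, to which the arbitrary user ID always belongs. A hypothetical Dockerfile sketch:

FROM store/oracle/database-enterprise:12.2.0.1-slim
USER root
# give the root group the same permissions as the oracle owner on the home directory
RUN chgrp -R 0 /home/oracle && chmod -R g=u /home/oracle
USER oracle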

Then the restart of the pod will go further:
CaptureOpenShiftOracle

This Oracle Database from the Docker Store is not really an image of an installed Oracle Database, but just a tar of Oracle Home and Database files that have to be untarred.

Now, in addition to the image size I have an additional 2GB layer for the container:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 11G 28G 28% /mnt/sda1
 
C:\Users\Franck>docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 6 3.568GB 1.261GB (35%)
Containers 17 9 1.895GB 58.87kB (0%)
Local Volumes 0 0 0B 0B
Build Cache 0B 0B

Of course there is more to customize. The minishift VM should have more memory, and so should the container for the Oracle Database. We probably also want to add an external volume and expose ports outside of the minishift VM.
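
Without going into details, that could look roughly like this (a sketch: the memory value only applies when the Minishift VM is created, the /ORCL mount path is assumed to be where this image keeps its database files, and the pod name must be the one of your own deployment):

C:\Users\Franck>minishift start --memory 8GB --disk-size 40g
C:\Users\Franck>oc set volume dc/ora122slimdeployment --add --name=oradata -t pvc --claim-size=10Gi --mount-path=/ORCL
C:\Users\Franck>oc port-forward ora122slimdeployment-1-86prl 1521 5500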

 

This article Oracle Database on OpenShift first appeared on Blog dbi services.

ODA database stuck in Deleting status

Tue, 2018-08-14 15:03

Facing an internal inconsistency in the ODA derby database is very painful (see https://blog.dbi-services.com/oda-lite-what-is-this-odacli-repository/ for more info about the derby database). I recently faced a case where the database deletion was failing and the database then remained in “Deleting” status. Connecting directly to the internal derby database and doing some cleaning yourself is very risky and should be performed at your own risk. So, in most cases, a database inconsistency issue ends with an Oracle Support ticket to get their help for the cleaning. Before doing so, I wanted to take a closer look at the issue and was very happy to fix it myself. I wanted to share my experience here.

Issue description

As explained in the introduction, the database deletion failed and the database remained in “Deleting” status.

[root@prod1 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
ea49c5a8-8747-4459-bb99-cd71c8c87d58     testtst1   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Deleting     80a2e501-31d8-4a5d-83db-e04dad34a7fa

Looking at the job activity log, we can see that the deletion is failing while trying to delete the FileSystem.

[root@prod1 ~]# odacli describe-job -i 50a8c1c2-686e-455e-878f-eaa537295c9f

Job details
----------------------------------------------------------------
                     ID:  50a8c1c2-686e-455e-878f-eaa537295c9f
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Failure
                Created:  July 25, 2018 9:40:17 AM CEST
                Message:  DCS-10011:Input parameter 'ACFS Device for delete' cannot be NULL.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:17 AM CEST       Success
Database Deletion                        July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:18 AM CEST       Success
Unregister Db From Cluster               July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Kill Pmon Process                        July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Database Files Deletion                  July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Deleting FileSystem                      July 25, 2018 9:40:21 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure

I decided to have a look at why it failed on the file system deletion step, and I was very surprised to see that there was no data volume for this database anymore, as can be seen in the volinfo command output below. I am not sure what happened, but it is weird: why fail and stop processing further if what you want to delete no longer exists?

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

Solution

So why not try to give the ODA what it expects to see? I therefore tried to create the ACFS volume with the exact expected name, and I was very happy to see that this solved the problem. There was no other relation key than the name of the volume. Let’s look in detail at the steps I performed.

Let’s create the data volume that the database expects.

ASMCMD> volcreate -G DATA -s 10G DATTESTTST1

ASMCMD> volinfo -G DATA -a
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265 
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

         Volume Name: DATTESTTST1
         Volume Device: /dev/asm/dattesttst1-265
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

Let’s create the file system for the newly created volume.

grid@prod1:/home/grid/ [+ASM1] mkfs.acfs /dev/asm/dattesttst1-265
mkfs.acfs: version                   = 12.2.0.1.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/dattesttst1-265
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Let’s check the mount points expected for the corresponding database.

[root@prod1 ~]# odacli describe-dbstorage -i 31d852f7-bdd0-40f5-9224-2ca139a2c3db
DBStorage details
----------------------------------------------------------------
                     ID: 31d852f7-bdd0-40f5-9224-2ca139a2c3db
                DB Name: testtst1
          DBUnique Name: testtst1_RZ1
         DB Resource ID: ea49c5a8-8747-4459-bb99-cd71c8c87d58
           Storage Type: Acfs
          DATA Location: /u02/app/oracle/oradata/testtst1_RZ1
          RECO Location: /u03/app/oracle/fast_recovery_area/
          REDO Location: /u03/app/oracle/redo/
   FLASH Cache Location:
                  State: ResourceState(status=Configured)
                Created: July 18, 2018 10:28:39 AM CEST
            UpdatedTime: July 18, 2018 10:29:01 AM CEST

Then we add and start the appropriate file system:

[root@prod1 testtst1_RZ1]# cd /u01/app/12.2.0.1/grid/bin/
[root@prod1 bin]# ./srvctl add filesystem -volume DATTESTTST1 -diskgroup DATA -path /u02/app/oracle/oradata/testtst1_RZ1 -fstype ACFS -autostart ALWAYS -mountowner oracle
[root@prod1 bin]# ./srvctl start filesystem -device /dev/asm/dattesttst1-265

Let’s check the mounted file system.

[root@prod1 bin]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
/dev/asm/dattesttst1-265
                       10G  265M  9.8G   3% /u02/app/oracle/oradata/testtst1_RZ1

Let’s now try to delete the database again. The -fd option is mandatory to force the deletion.

[root@prod1 bin]# odacli delete-database -i ea49c5a8-8747-4459-bb99-cd71c8c87d58 -fd
{
  "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskZJsonRpcExt_471",
    "taskName" : "Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion",
    "taskResult" : "",
    "startTime" : "July 25, 2018 10:04:24 AM CEST",
    "endTime" : "July 25, 2018 10:04:24 AM CEST",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_469",
    "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "July 25, 2018 10:04:24 AM CEST"
  } ],
  "createTimestamp" : "July 25, 2018 10:04:23 AM CEST",
  "resourceList" : [ ],
  "description" : "Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58",
  "updatedTime" : "July 25, 2018 10:04:23 AM CEST"
}

The database deletion is now successful.

[root@prod1 bin]# odacli describe-job -i 976c8689-a69d-4e0d-a5e0-e40a30a77d29

Job details
----------------------------------------------------------------
                     ID:  976c8689-a69d-4e0d-a5e0-e40a30a77d29
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Success
                Created:  July 25, 2018 10:04:23 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Deletion                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Unregister Db From Cluster               July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Kill Pmon Process                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Files Deletion                  July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:25 AM CEST      Success
Deleting Volume                          July 25, 2018 10:04:30 AM CEST      July 25, 2018 10:04:32 AM CEST      Success

Let’s check the volume and file system to make sure they have been removed.

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

grid@prod1:/home/grid/ [+ASM1] df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
grid@prod1:/home/grid/ [+ASM1]

Listing the databases shows that the only database has now been deleted.

[root@prod1 bin]# odacli list-databases
DCS-10032:Resource database is not found.

To complete the test and make sure everything is OK, I created a new database, which, as expected, was successful.

[root@prod1 bin]# odacli describe-job -i cf896c7f-0675-4980-a63f-a8a2b09b1352

Job details
----------------------------------------------------------------
                     ID:  cf896c7f-0675-4980-a63f-a8a2b09b1352
            Description:  Database service creation with db name: testtst2
                 Status:  Success
                Created:  July 25, 2018 10:12:24 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:25 AM CEST      Success
Creating volume dattesttst2              July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:36 AM CEST      Success
Creating ACFS filesystem for DATA        July 25, 2018 10:12:36 AM CEST      July 25, 2018 10:12:44 AM CEST      Success
Database Service creation                July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
Database Creation                        July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Change permission for xdb wallet files   July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Place SnapshotCtrlFile in sharedLoc      July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:37 AM CEST      Success
Running DataPatch                        July 25, 2018 10:18:34 AM CEST      July 25, 2018 10:18:47 AM CEST      Success
updating the Database version            July 25, 2018 10:18:47 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
create Users tablespace                  July 25, 2018 10:18:49 AM CEST      July 25, 2018 10:18:51 AM CEST      Success



[root@prod1 bin]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
e0e8163d-dcaa-4692-85c5-24fb9fe17291     testtst2   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Configured   80a2e501-31d8-4a5d-83db-e04dad34a7fa

 

 

 

This article ODA database stuck in Deleting status first appeared on Blog dbi services.

Upgrade EM 13.2 to EM 13.3

Tue, 2018-08-14 10:05

As the latest Enterprise Manager Cloud Control 13.3 has been out for a few days, I decided to test the upgrade procedure from Enterprise Manager Cloud Control 13.2.

There are a few prerequisites to complete first.

First you copy the emkey:

oracle@localhost:/home/oracle/ [oms13c] emctl config emkey 
-copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=em13c)(PORT=1521)))(CONNECT_DATA=
(SERVICE_NAME=EMREP13C)))"' -repos_user sysman -repos_pwd manager1 
-emkey_file /home/oracle/oms13c/sysman/config/emkey.ora
Oracle Enterprise Manager Cloud Control 13c Release 2  
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
Enter Admin User's Password : 
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running 
"emctl config emkey -remove_from_repos".

Check that the “_allow_insert_with_update_check” parameter in the repository database is set to TRUE:

SQL> show parameter _allow

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
_allow_insert_with_update_check      boolean	 TRUE

Just before running the upgrade procedure, you have to stop the OMS with the command emctl stop oms -all, and stop the agent with the usual command emctl stop agent.
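
For reference (the prompts are just illustrative; run each command from the corresponding OMS and agent home):

oracle@localhost:/home/oracle/ [oms13c] emctl stop oms -all
oracle@localhost:/home/oracle/ [agent13c] emctl stop agent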

I would also recommend running a full RMAN backup of the repository database.
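
A minimal sketch of such a backup, assuming you are connected to the repository database as the target:

RMAN> backup database plus archivelog;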

Then once you have unzipped the binaries you have downloaded, you simply run the command:

oracle@localhost:/home/oracle/software/ [oms13c] ./em13300_linux64.bin 
0%...........................................................................100%
Launcher log file is /tmp/OraInstall2018-08-13_10-45-07AM/
launcher2018-08-13_10-45-07AM.log.
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2591.940 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 5567 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed 
Preparing to launch the Oracle Universal Installer from 
/tmp/OraInstall2018-08-13_10-45-07AM
====Prereq Config Location main=== 
/tmp/OraInstall2018-08-13_10-45-07AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2018-08-13_10-45-07AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2018-08-13_10-45-07AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_WIQ10z/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/home/oracle/software
EMFileLoc:/tmp/OraInstall2018-08-13_10-45-07AM/oui/em/
ScratchPathValue :/tmp/OraInstall2018-08-13_10-45-07AM

Picture1

 

Picture2

I skipped the Updates

Picture3png

The checks are successful.

Picture4

We are upgrading an existing Enterprise Manager system, so we enter the existing Middleware home.

Picture5

We enter the new Middleware home.

Picture6

We enter the sys and sysman passwords.

Picture7

We can select additional plug-ins

Picture8

We enter the Weblogic password

Picture9

We do not share the location for Oracle BI Publisher, but we enable BI Publisher.

Picture11

We choose the default configuration ports.

Picture12

Picture13

At this point you can drink a few coffees because the upgrade procedure takes a long time…

Picture14

Just before the end of the upgrade process, you have to run the allroot.sh script:

[root@localhost oms133]# ./allroot.sh

Starting to execute allroot.sh ......... 

Starting to execute /u00/app/oracle/oms133/root.sh ......
/etc exist
/u00/app/oracle/oms133
Finished execution of  /u00/app/oracle/oms133/root.sh ......

Picture15

The upgrade is successful :=)

Picture16

But the upgrade is not finished yet: you have to restart and upgrade the management agent and delete the old OMS installation.

In order to upgrade the agent, you select Upgrade Agent from the tools menu:

Picture17

But I had a problem with my agent in version 13.2. The agent was in a non-upgradable state, and Oracle recommended running emctl control agent runCollection <target>:oracle_home oracle_home_config, but the command did not work, returning: EMD runCollection error:no target collection

So I decided to delete the agent and to manually install a new agent following the classic GUI method.

The agent in version 13.3 is now up and running:

Picture18

As in previous Enterprise Manager versions, the deinstallation is very easy. You only have to check that no old processes are still running:

oracle@localhost:/home/oracle/oms13c/oui/bin/ [oms13c] ps -ef | grep /home/oracle/oms13c
oracle    9565 15114  0 11:51 pts/0    00:00:00 grep --color=auto /home/oracle/oms13c

Then we simply delete the old OMS HOME:

oracle@localhost:/home/oracle/ [oms13c] rm -rf oms13c

There are not so many new features in Enterprise Manager 13.3. They concern the framework and infrastructure, Middleware management, Cloud management and Database management. You can have a look at those new features here:

https://docs.oracle.com/cd/cloud-control-13.3/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Even though the upgrade procedure took a long time, I did not encounter any blocking errors. The upgrade procedure is much the same as before.

Furthermore, with Enterprise Manager 13.3, we have support for monitoring and managing Oracle Database 18c:

Picture19

 

This article Upgrade EM 13.2 to EM 13.3 first appeared on Blog dbi services.

Licensable targets and Management Packs with EM13c

Tue, 2018-08-14 09:42

When you add a new target in Enterprise Manager 13c, the management packs are enabled by default. This could be a problem in case of an LMS audit, and to avoid any problems, you have to manually disable those management packs.

If, like me, you have recently moved your database infrastructure to a new one and have to add a hundred targets, you will have to click some hundreds of times on the management packs page in EM13c.

As you can see I added a new Oracle Database target and all the management packs are enabled:

mgmt1

 

It is possible to define the way EM13c manages the licensable targets. On the Management Packs page, you select Auto Licensing and then select the packs you want to disable; the next time you add a new target (Oracle database, host or WebLogic), only the packs you have left enabled will be set for the targets you add:

mgmt2

I decided to keep enabled only the Database Tuning Pack and Database Diagnostics Pack.

mgmt3

And now the management packs are correct when I add a new database target:

mgmt4

I had missed this feature for a long time; it would have saved me a lot of time by avoiding having to deactivate the packs manually in Enterprise Manager Cloud Control :=)

 

 

 

This article Licensable targets and Management Packs with EM13c first appeared on Blog dbi services.

Using the managed PostgreSQL service in Azure

Tue, 2018-08-14 00:59

In the last post we had a look at how you can bring up a customized PostgreSQL instance in the Azure cloud. Now I want to check what you can do with the managed service. From a managed service I expect that I can bring up a PostgreSQL instance quite easily and quickly and that I can add replicas on demand. Let’s see what is there and how you can use it.

Of course we need to log in again:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

The az command for working with PostgreSQL is simply “postgres”:

dwe@dwe:~$ az postgres --help

Group
    az postgres : Manage Azure Database for PostgreSQL servers.

Subgroups:
    db          : Manage PostgreSQL databases on a server.
    server      : Manage PostgreSQL servers.
    server-logs : Manage server logs.

It does not look like we can do much, but you never know, so let’s bring up an instance. Again, we need a resource group first:

dwe@dwe:~$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

Let’s try to bring up an instance with a little storage (512MB), SSL enabled and the standard postgres user:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password xxxxx --location westeurope
Deployment failed. Correlation ID: e3cd6d04-3557-4c2a-b70f-7c11a61c395d. Server name 'PG1' cannot be empty or null. It can only be made up of lowercase letters 'a'-'z', the numbers 0-9 and the hyphen. The hyphen may not lead or trail in the name.

OK, it seems upper case letters are not allowed, so let’s try again:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: e50ca5d6-0e38-48b8-8015-786233c0d103. The storage size of 512 MB does not meet the minimum required storage of 5120 MB.

OK, we need a minimum of 5120 MB of storage, so try again:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: 470975ce-1ee1-4531-8703-55947772fb51. Password validation failed. The password does not meet policy requirements because it is not complex enough.

This one is actually good, as it at least rejects the postgres/postgres combination. Again, with a better password:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password "xxx" --location westeurope
{
  "administratorLogin": "postgres",
  "earliestRestoreDate": "2018-08-13T12:30:10.763000+00:00",
  "fullyQualifiedDomainName": "mymanagedpg1.postgres.database.azure.com",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1",
  "location": "westeurope",
  "name": "mymanagedpg1",
  "resourceGroup": "PGTEST",
  "sku": {
    "capacity": 2,
    "family": "Gen4",
    "name": "B_Gen4_2",
    "size": null,
    "tier": "Basic"
  },
  "sslEnforcement": "Enabled",
  "storageProfile": {
    "backupRetentionDays": 7,
    "geoRedundantBackup": "Disabled",
    "storageMb": 5120
  },
  "tags": null,
  "type": "Microsoft.DBforPostgreSQL/servers",
  "userVisibleState": "Ready",
  "version": "9.6"
}

Better. What I am not happy with is that the default seems to be PostgreSQL 9.6. PostgreSQL 10 has been out for around a year now and should definitely be the default. In the portal it looks like this, and there you can also find the information required for connecting to the instance:
[Screenshot: Selection_004]

So let's try to connect:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1
psql: FATAL:  no pg_hba.conf entry for host "x.x.x.xx", user "postgres", database "postgres@mymanagedpg1", SSL on
FATAL:  SSL connection is required. Please specify SSL options and retry.

How do we manage that with the managed PostgreSQL service? There is no az command to modify pg_hba.conf; what we need to do instead is create a firewall rule:

dwe@dwe:~$ az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n allowall --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255
{
  "endIpAddress": "255.255.255.255",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/firewallRules/allowall",
  "name": "allowall",
  "resourceGroup": "PGTEST",
  "startIpAddress": "0.0.0.0",
  "type": "Microsoft.DBforPostgreSQL/servers/firewallRules"
}
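In practice you would restrict such a rule to the client addresses that really need access. A minimal sketch, assuming a single (hypothetical) workstation address of 37.201.6.36:

dwe@dwe:~$ az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n allowclient --start-ip-address 37.201.6.36 --end-ip-address 37.201.6.36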

Of course you should not open the instance to the whole world as I am doing here; a restricted rule like the one sketched above is the better option. Once a rule is in place, connections work:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> 

There is an additional database called “azure_maintenance” and we are not allowed to connect there:

postgres=> \l
                                                               List of databases
       Name        |      Owner      | Encoding |          Collate           |           Ctype            |          Access privileges          
-------------------+-----------------+----------+----------------------------+----------------------------+-------------------------------------
 azure_maintenance | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | azure_superuser=CTc/azure_superuser
 postgres          | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | 
 template0         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
 template1         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
(4 rows)
postgres=> \c azure_maintenance
FATAL:  permission denied for database "azure_maintenance"
DETAIL:  User does not have CONNECT privilege.
Previous connection kept

The minor release is one behind, but as the latest minor release only came out this week, that seems to be fine:

postgres=> select version();
                           version                           
-------------------------------------------------------------
 PostgreSQL 9.6.9, compiled by Visual C++ build 1800, 64-bit
(1 row)

postgres=> 

I would probably not compile PostgreSQL with "Visual C++", but given that this is a Microsoft product it is no surprise that we are running on Windows:

postgres=> select name,setting from pg_settings where name = 'archive_azure_location';
          name          |          setting          
------------------------+---------------------------
 archive_azure_location | c:\BackupShareDir\Archive
(1 row)

… and the PostgreSQL source code was modified as this parameter does not exist in the community version.

Accessing the server logs is quite easy:

dwe@dwe:~$ az postgres server-logs list --resource-group PGTEST --server-name mymanagedpg1
[
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_122334.log",
    "lastModifiedTime": "2018-08-13T12:59:26+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_122334.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 6,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=ojGG2km5NFrfQ8dJ0btz8bhmwNMe0F7oq0iTRum%2FjJ4%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  },
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_130000.log",
    "lastModifiedTime": "2018-08-13T13:00:00+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_130000.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 0,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_130000.log?sv=2015-04-05&sr=f&sig=k8avZ62KyLN8RW0ZcIigyPZa40EKNBJvNvneViHjyeI%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  }
]

We can just download the logs and have a look at them:

dwe@dwe:~$ wget "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D&se=2018-08-13T14%3A07%3A29Z&sp=r" 
dwe@dwe:~$ more postgresql-2018-08-13_122334.log\?sv\=2015-04-05\&sr\=f\&sig\=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D\&se\=2018-08-13T14%3A07%3A29Z\&sp\=r
2018-08-13 12:23:34 UTC-5b717845.6c-LOG:  could not bind IPv6 socket: A socket operation was attempted to an unreachable host.
	
2018-08-13 12:23:34 UTC-5b717845.6c-HINT:  Is another postmaster already running on port 20686? If not, wait a few seconds and retry.
2018-08-13 12:23:34 UTC-5b717846.78-LOG:  database system was shut down at 2018-08-13 12:23:32 UTC
2018-08-13 12:23:35 UTC-5b717846.78-LOG:  database startup complete in 1 seconds, startup began 2 seconds after last stop
...

The PostgreSQL configuration is accessible quite easily as well:

dwe@dwe:~$ az postgres server configuration list --resource-group PGTEST --server-name mymanagedpg1 | head -20
[
  {
    "allowedValues": "on,off",
    "dataType": "Boolean",
    "defaultValue": "on",
    "description": "Enable input of NULL elements in arrays.",
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/array_nulls",
    "name": "array_nulls",
    "resourceGroup": "PGTEST",
    "source": "system-default",
    "type": "Microsoft.DBforPostgreSQL/servers/configurations",
    "value": "on"
  },
  {
    "allowedValues": "safe_encoding,on,off",
    "dataType": "Enumeration",
    "defaultValue": "safe_encoding",
    "description": "Sets whether \"\\'\" is allowed in string literals.",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/backslash_quote",
    "name": "backslash_quote",

Setting a parameter is easy as well:

dwe@dwe:~$ az postgres server configuration set --name work_mem --value=32 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: 634fd473-0c28-43a7-946e-ecbb26faf961. The value '32' for configuration 'work_mem' is not valid. The allowed values are '4096-2097151'.
dwe@dwe:~$ az postgres server configuration set --name work_mem --value=4096 --resource-group PGTEST --server-name mymanagedpg1
{
  "allowedValues": "4096-2097151",
  "dataType": "Integer",
  "defaultValue": "4096",
  "description": "Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/work_mem",
  "name": "work_mem",
  "resourceGroup": "PGTEST",
  "source": "system-default",
  "type": "Microsoft.DBforPostgreSQL/servers/configurations",
  "value": "4096"
}
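To read back a single parameter afterwards there is also a show sub-command; a sketch only, assuming az postgres server configuration show mirrors the list/set commands used above:

dwe@dwe:~$ az postgres server configuration show --name work_mem --resource-group PGTEST --server-name mymanagedpg1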

The interesting point is what happens when we change a parameter that requires a restart:

dwe@dwe:~$ az postgres server configuration set --name shared_buffers --value=4096 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: d849b302-1c41-4b13-a2d5-6b24f144be89. The configuration 'shared_buffers' does not exist for PostgreSQL server version 9.6.
dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> show shared_buffers ;
 shared_buffers 
----------------
 512MB
(1 row)
postgres=> 

So the memory configuration depends on the pricing model; more information is available here. If you want to scale up or down, "you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period".
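A rough sketch of what scaling could look like from the CLI, assuming the az postgres server update sub-command (parameter names taken from the Azure CLI help, so treat this as an illustration only):

dwe@dwe:~$ az postgres server update --resource-group PGTEST --name mymanagedpg1 --sku-name B_Gen4_1
dwe@dwe:~$ az postgres server update --resource-group PGTEST --name mymanagedpg1 --storage-size 10240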

Some final thoughts: bringing an instance up is quite easy and simple. The default PostgreSQL version is 9.6.x, which is not a good choice in my opinion; version 10 is already at its fifth minor release, is stable and is the most recent version. Scaling up and down is a matter of changing basic things such as cores, memory, storage and the pricing model. For many workloads this is probably fine; if you want more control, you are better off provisioning VMs and doing the PostgreSQL setup on your own. High availability is not implemented by adding replicas but by creating a new node, attaching the storage to that node and bringing it up. This might be sufficient or it might not be, depending on your requirements.

In a future post we will build our own PostgreSQL HA solution on Azure.

 

Cet article Using the managed PostgreSQL service in Azure est apparu en premier sur Blog dbi services.

ORACLE_HOME with symbolic link and postupgrade_fixups

Mon, 2018-08-13 07:26

Here is a quick post you may find via Google if you hit the following error when running postupgrade_fixups.sql after an upgrade:

ERROR - Cannot open the preupgrade_messages.properties file from the directory object preupgrade_dir
DECLARE
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_PREUP", line 3300
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_PREUP", line 3260
ORA-06512: at "SYS.DBMS_PREUP", line 9739
ORA-06512: at line 11


Before upgrading a database with dbupgrade, you run, on the current version of your database, the preupgrade.jar from the new version (preferably after downloading the latest one from MOS). This generates a script to run before the upgrade and one to run after the upgrade. Those scripts are generated under $ORACLE_BASE/cfgtoollogs/<database>/preupgrade, where you find something like this:

drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 ..
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 oracle
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 upgrade
-rw-r--r--. 1 oracle oinstall 14846 Aug 11 20:19 dbms_registry_extended.sql
-rw-r--r--. 1 oracle oinstall 7963 Aug 11 20:19 preupgrade_driver.sql
-rw-r--r--. 1 oracle oinstall 422048 Aug 11 20:19 preupgrade_package.sql
-rw-r--r--. 1 oracle oinstall 14383 Aug 11 20:19 parameters.properties
-rw-r--r--. 1 oracle oinstall 83854 Aug 11 20:19 preupgrade_messages.properties
-rw-r--r--. 1 oracle oinstall 50172 Aug 11 20:19 components.properties
-rw-r--r--. 1 oracle oinstall 2 Aug 11 20:19 checksBuffer.tmp
-rw-r--r--. 1 oracle oinstall 6492 Aug 11 20:20 preupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 7575 Aug 11 20:20 postupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 5587 Aug 11 20:20 preupgrade.log

Everything is straightforward.

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] java -jar /u00/app/oracle/product/18EE/rdbms/admin/preupgrade.jar
...
==================
PREUPGRADE SUMMARY
==================
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade.log
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Execute fixup scripts as indicated below:
 
Before upgrade log into the database and execute the preupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
 
After the upgrade:
 
Log into the database and execute the postupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Preupgrade complete: 2018-08-11T19:37:29
oracle@vmreforatun01:/u00/app/oracle/product/ [DB2]

For a database we have in a lab for our workshops, which I upgraded to 18c, I ran the postupgrade fixups script after the upgrade but got the error mentioned above about a UTL_FILE invalid file operation in preupgrade_dir. I looked at the script: postupgrade_fixups.sql creates a directory object on $ORACLE_HOME/rdbms/admin and calls preupgrade_package.sql, which reads preupgrade_messages.properties.

This is a bit confusing because the same file also exists in the cfgtoollogs preupgrade subdirectory, but my directory object looks good:

SQL> select directory_name,directory_path from dba_directories where directory_name='PREUPGRADE_DIR';
 
DIRECTORY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
PREUPGRADE_DIR
/u00/app/oracle/product/18SE/rdbms/admin

So, as “ORA-29283: invalid file operation” is not very detailed, I traced all the file-related system calls (strace -f -y -e trace=file) while running sqlplus and got this:

[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 723389136}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/preupgrade_messages.properties", {st_mode=S_IFREG|0644, st_size=83854, ...}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] lstat("/u00", {st_mode=S_IFLNK|0777, st_size=11, ...}) = 0
[pid 29974] readlink("/u00", "/oracle/u00", 4095) = 11
[pid 29974] lstat("/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms/admin", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 724514469}) = 0

Then I realized that the ORACLE_HOME is under a symbolic link. For whatever reason, in this environment, ORACLE_BASE is physically /oracle/u00/app/oracle but there's a /u00 link to /oracle/u00, and this short path was used to set the environment variables. Since 11g, and for security reasons, UTL_FILE does not accept directory objects whose path goes through a symbolic link. And we can see in the strace above that it was detected (readlink).

So, a quick workaround here is to change postupgrade_fixups.sql to use the physical path instead of the one read from ORACLE_HOME by dbms_system.get_env.
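A minimal sketch of that idea, using the physical path resolved above: recreate the directory object on the real path before re-running the fixups (or hard-code that path in the script):

SQL> create or replace directory PREUPGRADE_DIR as '/oracle/u00/app/oracle/product/18SE/rdbms/admin';

Directory created.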

However, if you can restart the instance, it is better to set the ORACLE_HOME to the physical path. Symbolic links for the ORACLE_HOME can be misleading. Remember that the ORACLE_HOME text string is part of the instance identification, combined with ORACLE_SID, so having different values, even when they resolve to the same path, will bring a lot of problems. Do not forget to change it everywhere (shell environment, listener.ora) so that you are sure nobody will use a different one when starting the database.
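To double-check what your environment actually points to, resolving the symbolic links is a one-liner (a quick sketch):

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] readlink -f $ORACLE_HOME
/oracle/u00/app/oracle/product/18SE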

 

Cet article ORACLE_HOME with symbolic link and postupgrade_fixups est apparu en premier sur Blog dbi services.

Bringing up your customized PostgreSQL instance on Azure

Mon, 2018-08-13 06:53

The Azure cloud is becoming more and more popular, so I gave it a try and started simple. The goal was to provision a VM, compile and install PostgreSQL, and then connect to the instance. There is also a fully managed PostgreSQL service, but I wanted to do it on my own just to get a feeling for the command line tools. Here is how I did it.

Obviously you need to log in first, which is just a matter of this:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

To do anything in Azure you'll need to create a resource group, which is like a container holding your resources. As a resource group needs to be created in a specific location, the next step is to get a list of the available locations:

dwe@dwe:/var/tmp$ az account list-locations
[
  {
    "displayName": "East Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/eastasia",
    "latitude": "22.267",
    "longitude": "114.188",
    "name": "eastasia",
    "subscriptionId": null
  },
  {
    "displayName": "Southeast Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/southeastasia",
    "latitude": "1.283",
    "longitude": "103.833",
    "name": "southeastasia",
    "subscriptionId": null
  },
...

Once you have selected a location the resource group can be created:

dwe@dwe:/var/tmp$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

All you need to do to create a CentOS VM is this simple command:

dwe@dwe:/var/tmp$ az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys
{
  "fqdns": "",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg",
  "location": "westeurope",
  "macAddress": "xx-xx-xx-xx-xx-xx",
  "powerState": "VM running",
  "privateIpAddress": "x.x.x.x",
  "publicIpAddress": "x.x.x.x",
  "resourceGroup": "PGTEST",
  "zones": ""
}

While the VM is getting created you can watch the resources appearing in the portal:
[Screenshots: Selection_026, Selection_027, Selection_028]

As soon as the VM is ready, connecting via ssh is possible (the keys have been added automatically, no password required):

dwe@dwe:/var/tmp$ ssh x.x.x.x
The authenticity of host 'xx.xx.x.x (xx.xx.x.x)' can't be established.
ECDSA key fingerprint is SHA256:YzNOzg30JH0A3U1R+6WzuJEd3+7N4GmwpSVkznhuTuE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xx.xx.x.x' (ECDSA) to the list of known hosts.
[dwe@MyPg ~]$ ls -la /etc/yum.repos.d/
total 44
drwxr-xr-x.  2 root root  209 Sep 25  2017 .
drwxr-xr-x. 86 root root 8192 Aug  2 08:05 ..
-rw-r--r--.  1 root root 1706 Sep 25  2017 CentOS-Base.repo
-rw-r--r--.  1 root root 1309 Nov 29  2016 CentOS-CR.repo
-rw-r--r--.  1 root root  649 Nov 29  2016 CentOS-Debuginfo.repo
-rw-r--r--.  1 root root  314 Nov 29  2016 CentOS-fasttrack.repo
-rw-r--r--.  1 root root  630 Nov 29  2016 CentOS-Media.repo
-rw-r--r--.  1 root root 1331 Nov 29  2016 CentOS-Sources.repo
-rw-r--r--.  1 root root 2893 Nov 29  2016 CentOS-Vault.repo
-rw-r--r--.  1 root root  282 Sep 25  2017 OpenLogic.repo
[dwe@MyPg ~]$ sudo su -
[root@MyPg ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core) 
[root@MyPg ~]# 

Of course we want to update all the operating system packages to the latest release before moving on. Be careful to really exclude the WALinuxAgent here, because otherwise the agent will be upgraded (and restarted) as well and the script execution will fail as you lose connectivity:

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum update -y --exclude=WALinuxAgent"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum update -y --exclude=WALinuxAgent"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

To compile PostgreSQL we need some packages (not all of them are required for compiling PostgreSQL, but this is what we usually install):

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"}'

{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Preparation work for the user, group and directories:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

For the next steps we will just copy over this script and then execute it:

dwe@dwe:~$ cat installPG.sh 
#!/bin/bash
cd /u01/app; wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
tar -axf postgresql-10.5.tar.bz2
rm -f postgresql-10.5.tar.bz2
cd postgresql-10.5
PGHOME=/u01/app/postgres/product/10/db_5/
SEGSIZE=2
BLOCKSIZE=8
WALSEGSIZE=16
./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
	    --with-systemd 
make -j 4 all
make install
cd contrib
make -j 4 install

dwe@dwe:~$ scp installPG.sh x.x.x.x:/var/tmp/
installPG.sh                                                                                                100% 1111     1.1KB/s   00:00    

Of course you could also add the yum commands to the same script, but I wanted to show both ways: using the CustomScript feature with an inline command and copying over a script for execution. Let's execute it:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Binaries ready. Initialize the cluster:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Startup:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

… and the instance is up and running:

dwe@dwe:~$ ssh x.x.x.x
Last login: Mon Aug 13 10:43:53 2018 from ip-37-201-6-36.hsi13.unitymediagroup.de
[dwe@MyPg ~]$ sudo su - postgres
Last login: Mon Aug 13 11:33:52 UTC 2018 on pts/0
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql -c 'select version()'
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)
[postgres@MyPg ~]$ 

When you want to access this instance from outside Azure you will need to open the port:

dwe@dwe:~$ az vm open-port --resource-group PGTEST --name MyPg --port 5432

Once you have configured PostgreSQL for accepting connections:

[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql
psql (10.5)
Type "help" for help.

postgres=# alter system set listen_addresses = '*';
ALTER SYSTEM
postgres=# alter user postgres password 'secret';
ALTER ROLE
postgres=# show port ;
 port 
------
 5432
(1 row)

postgres=# \q
[postgres@MyPg ~]$ echo "host    all             all             37.201.6.36/32   md5" >> /u02/pgdata/PG1/pg_hba.conf 
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1/ restart

… you can access the instance from outside Azure:

dwe@dwe:~$ psql -h 137.117.157.183 -U postgres
Password for user postgres: 
psql (9.5.13, server 10.5)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# 

Put all of that into a well-written script and you can have your customized PostgreSQL instance ready in Azure in a couple of minutes; a minimal sketch of such a wrapper follows below. Now that I have a feeling for how this works in general, I'll look into the managed PostgreSQL service in another post.
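A minimal sketch of such a wrapper, reusing only the commands shown above (the resource group, the VM name and the final "compile and initdb" part are of course up to you):

dwe@dwe:~$ cat provision_pg_vm.sh
#!/bin/bash
# create the resource group and the CentOS VM
az group create --name PGTEST --location "westeurope"
az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys
# patch the OS but leave the WALinuxAgent untouched
az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum update -y --exclude=WALinuxAgent"}'
# ...install the build dependencies, copy installPG.sh, compile, initdb and start as shown above...
# finally open the PostgreSQL port
az vm open-port --resource-group PGTEST --name MyPg --port 5432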

 

Cet article Bringing up your customized PostgreSQL instance on Azure est apparu en premier sur Blog dbi services.

Documentum – Silent Install – D2

Sun, 2018-08-12 07:50

In previous blogs, we installed the Documentum binaries, a docbroker (plus licence(s) if needed) as well as several repositories in silent mode. In this one, we will see how to install D2 for a predefined list of docbases/repositories (on the Content Server side), and you will see that, here, the process is quite different.

D2 has supported silent installation for quite some time now and it is pretty easy to do. At the end of the D2 GUI installer, there is a screen asking whether you want to generate a silent properties (response) file containing the information that was entered in the GUI installer. So this is one way to start working with the silent installation, or you can just read this blog ;).

So, let’s start this with the preparation of a template file. I will use a lot of placeholders in the template and will replace the values with sed commands, just as a quick look at how you can script a silent installation with a template configuration file and some properties prepared before.

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/D2_template.xml
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/D2_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<AutomatedInstallation langpack="eng">
  <com.izforge.izpack.panels.HTMLHelloPanel id="welcome"/>
  <com.izforge.izpack.panels.UserInputPanel id="SelectInstallOrMergeConfig">
    <userInput>
      <entry key="InstallD2" value="true"/>
      <entry key="MergeConfigs" value="false"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.HTMLInfoPanel id="readme"/>
  <com.izforge.izpack.panels.PacksPanel id="UNKNOWN (com.izforge.izpack.panels.PacksPanel)">
    <pack index="0" name="Installer files" selected="true"/>
    <pack index="1" name="D2" selected="###WAR_REQUIRED###"/>
    <pack index="2" name="D2-Config" selected="###WAR_REQUIRED###"/>
    <pack index="3" name="D2-API for Content Server/JMS" selected="true"/>
    <pack index="4" name="D2-API for BPM" selected="###BPM_REQUIRED###"/>
    <pack index="5" name="DAR" selected="###DAR_REQUIRED###"/>
  </com.izforge.izpack.panels.PacksPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.0">
    <userInput>
      <entry key="jboss5XCompliant" value="false"/>
      <entry key="webappsDir" value="###DOCUMENTUM###/D2-Install/war"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.2">
    <userInput>
      <entry key="pluginInstaller" value="###PLUGIN_LIST###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.3">
    <userInput>
      <entry key="csDir" value="###DOCUMENTUM###/D2-Install/D2-API"/>
      <entry key="bpmDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/bpm.ear"/>
      <entry key="jmsDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/ServerApps.ear"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.4">
    <userInput>
      <entry key="installationDir" value="###DOCUMENTUM###/D2-Install/DAR"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.5">
    <userInput>
      <entry key="dfsDir" value="/tmp/###DFS_SDK_PACKAGE###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.7">
    <userInput>
      <entry key="COMMON.USER_ACCOUNT" value="###INSTALL_OWNER###"/>
      <entry key="install.owner.password" value="###INSTALL_OWNER_PASSWD###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.8">
    <userInput>
      <entry key="SERVER.REPOSITORIES.NAMES" value="###DOCBASE_LIST###"/>
      <entry key="setReturnRepeatingValue" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.9">
    <userInput>
      <entry key="securityRadioSelection" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPD2ConfigOrClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseUsetheSameDFC">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableDFCSessionPool">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDFCKeyStoreInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetD2ConfigLanguage">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableD2BOCS">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetHideDomainforConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTemporaryMaxFiles">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="10">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="11">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfoForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="12">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="13">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="14">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="15">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="16">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="17">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="18">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="19">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="20">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="21">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="22">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTransferMode">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="24">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="25">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableAuditing">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPchooseWebAppServer">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskWebappsDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskNewWarDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.InstallPanel id="UNKNOWN (com.izforge.izpack.panels.InstallPanel)"/>
  <com.izforge.izpack.panels.XInfoPanel id="UNKNOWN (com.izforge.izpack.panels.XInfoPanel)"/>
  <com.izforge.izpack.panels.FinishPanel id="UNKNOWN (com.izforge.izpack.panels.FinishPanel)"/>
</AutomatedInstallation>

[dmadmin@content_server_01 ~]$

 

As you probably understood by looking at the above file, I'm using "/tmp/" for the input elements needed by D2, like the DFS package, the D2 installer or the D2+Pack plugins, and I'm using "$DOCUMENTUM/D2-Install" as the output folder where D2 generates its artefacts.

Once you have the template ready, you can replace the placeholders as follows (this is just an example configuration based on the other silent-install blogs I have written so far):

[dmadmin@content_server_01 ~]$ export d2_install_file=/tmp/dctm_install/D2.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###PLUGIN_LIST###,/tmp/D2_pluspack_4.7.0.P18/Plugins/C2-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/D2-Bin-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@content_server_01 ~]$

 

A short description of these properties as well as some notes on the values used above:

  • langpack: The language you are usually using for running the installers… English is fine if you use this template
  • entry key=”InstallD2″: Whether or not you want to install D2
  • entry key=”MergeConfigs”: Whether or not you want to merge the existing configuration/installation with the new one. I'm always restarting a D2 installation from scratch (removing the D2 hidden files for that) so I always set this to false
  • pack index=”0″ name=”Installer files”: Always set this to true to install D2 on a CS
  • pack index=”1″ name=”D2″: Whether or not you want to generate the D2 WAR file. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • pack index=”2″ name=”D2-Config”: Same as above but for the D2-Config WAR file
  • pack index=”3″ name=”D2-API for Content Server/JMS”: Whether or not you want the D2 Installer to put the D2-specific libraries into the JMS lib folder (path defined in: entry key=”jmsDir”). Even if you set this to true, you will still need to put a lot of D2 libraries into the JMS lib folder manually, because D2 only puts a few of them there while many more are required to run D2 properly (see the documentation for the full list)
  • pack index=”4″ name=”D2-API for BPM”: Same as above but for the BPM this time (path defined in: entry key=”bpmDir”)
  • pack index=”5″ name=”DAR”: Whether or not you want to generate the DARs. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • entry key=”jboss5XCompliant”: I guess this is for the JBoss 5 support so if you are on Dctm 7.x, leave this as false
  • entry key=”webappsDir”: The path the D2 Installer will put the generated WAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/war” so this folder MUST exist before running the installer in silent
  • entry key=”pluginInstaller”: This one is a little bit trickier… It's a semicolon-separated list of all the D2+Pack plugins you would like to install in addition to D2. In the above, I'm using the C2, D2-Bin and O2 plugins. The D2+Pack package must obviously be extracted BEFORE running the installer in silent mode and all the paths MUST exist (you will need to extract the plugin jar from each plugin zip file). I opened a few bug & enhancement requests for these, so if you are facing an issue, let me know, I might be able to help you
  • entry key=”csDir”: The path the D2 Installer will put the generated libraries into. In this example, I set it to “$DOCUMENTUM/D2-Install/D2-API” so this folder MUST exist before running the installer in silent
  • entry key=”bpmDir”: The path the D2 Installer will put a few of the D2 libraries into for the BPM (it’s not all needed JARs and this parameter is obviously not needed if you set ###BPM_REQUIRED### to false)
  • entry key=”jmsDir”: Same as above but for the JMS this time
  • entry key=”installationDir”: The path the D2 Installer will put the generated DAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/DAR” so this folder MUST exist before running the installer in silent
  • entry key=”dfsDir”: The path where the DFS SDK can be found. The DFS SDK package MUST be extracted in this folder before running the installer in silent
  • entry key=”COMMON.USER_ACCOUNT”: The name of the Documentum Installation Owner
  • entry key=”install.owner.password”: The password of the Documentum Installation Owner. I used above a “read -s” command so it doesn’t appear on the command line, but it will be put in clear text in the xml file…
  • entry key=”SERVER.REPOSITORIES.NAMES”: A comma-separated list of all docbases/repositories (without spaces) that need to be configured for D2. The DARs will be installed automatically on these docbases/repositories and, if you want to do it properly, it mustn't contain the GR. You could potentially add the GR in this parameter, but then all D2 DARs would be installed into the GR and this isn't needed… Only the “D2-DAR.dar” and “Collaboration_Services.dar” need to be installed on the GR, so I only add normal docbases/repositories in this parameter and, once D2 is installed, I manually deploy these two DARs into the GR (I wrote a blog about deploying DARs easily to a docbase a few years ago if you are interested). So, here I have a value of “Docbase1″, but if you had two, you could set it to “Docbase1,Docbase2″
  • entry key=”setReturnRepeatingValue”: Whether or not you want the repeating values. A value of true should set the “return_top_results_row_based=false” in the server.ini
  • entry key=”securityRadioSelection”: A value of true means that D2 has to apply Security Rules to content BEFORE applying AutoLink, and a value of false means that D2 only does it AFTER
  • That's the end of this file because I'm using D2 4.7 and, in D2 4.7, there is no Lockbox anymore! If you are using a previous D2 version, you will need to add additional parameters for the D2 Lockbox generation, location, password, and so on…

 

Once the properties file is ready, you can install D2 using the following command:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar ${d2_install_file}

 

You now know how to install D2 on a Content Server using the silent installation provided by D2. As you saw above, it is quite different from the silent installation of the other Documentum components, but it works… Maybe at some point in the future, D2 will switch to the same kind of properties file as Documentum.

 

Cet article Documentum – Silent Install – D2 est apparu en premier sur Blog dbi services.

18c runInstaller -silent

Sun, 2018-08-12 02:34

You find two different ‘runInstaller’ scripts under an Oracle Home: the old one, the Oracle Universal Installer, in $ORACLE_HOME/oui/bin, and the new one, directly in $ORACLE_HOME. They have the same name but are completely different. The old one was used to install an Oracle Home from the installation media, but in 18c you don't use it anymore: Oracle used it to build the Oracle Home image, and you now simply download and unzip your Oracle Home. You only have to configure it and re-link the binaries, and this is done by the new runInstaller at the root of the Oracle Home. Actually, it is just a shell script that runs the Perl dbSetup.pl to set up the Oracle Database software. In my opinion, it would be better to call it dbSetup.sh rather than renaming it to runInstaller, especially given that the same thing for Grid Infrastructure has been called GridSetup.sh since 12cR2. The Perl script finally runs the Java GUI. It can also be run on the command line, aka silent mode, which is the goal of this post. The command line arguments are similar to, but not the same as, those of the old runInstaller.
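Just to make the two locations above tangible, listing them from a configured 18c home shows both scripts (a quick check, nothing more):

ls $ORACLE_HOME/runInstaller $ORACLE_HOME/oui/bin/runInstaller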

Prerequisites

You may want to run only the prerequisite checks to see whether your system is ready for the installation. Here is how to do so on the command line:

$ $ORACLE_HOME/runInstaller -silent -executePrereqs -responseFile $ORACLE_HOME/inventory/response/db_install.rsp
 
Launching Oracle Database Setup Wizard...
 
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

From there we can check the log for the tests that failed, as in the following example:

INFO: [Aug 11, 2018 6:08:21 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:IGNORABLE
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] Run Level: This is a prerequisite condition to test whether the system is running with proper run level.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39-400.211.1".
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED

Software Install

You can pass all parameters on the command line (‘runInstaller -silent -help’ shows all possibilities), but in all cases you need a response file, so I put everything I need in the response file. There's no mention of the ORACLE_HOME because you already unzipped it in the right place. The most important setting is the edition, which seems to accept [EE, SEONE, SE2, HP, XP, PE]. I didn't try it, but Standard Edition One is only relevant for versions <= 12.1.0.1 by the way.

cd $ORACLE_HOME
 
cat > db18EE.rsp <<END
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u00/app/oraInventory
ORACLE_BASE=/u00/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=dba
END

There is no need for the oracle.install.db.config variables because I'm installing the software only, without creating a database.

Here is how to run the dbSetup. You can use ‘-ignorePrereqFailure’ to ignore the prerequisite checks if you want to install on a host where some prerequisites fail:

./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp

 
The installation log goes into oraInventory/logs and, as usual, you have to run root.sh:

As a root user, execute the following script(s):
1. /u00/app/oracle/product/18SE/root.sh
 
Execute /u00/app/oracle/product/18SE/root.sh on the following nodes:
[vmreforatun01]

This new runInstaller can also apply one-off patches with -applyOneOffs, mentioning the patch locations. You can also build a customized Oracle Home image with -createGoldImage -destinationLocation, and even mention some files or paths to exclude with -exclFiles to make it smaller.
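As a rough sketch only (the flags come from the text above, but the patch and destination paths here are hypothetical and I have not validated the exact combinations), this could look like:

./runInstaller -silent -responseFile ./db18EE.rsp -applyOneOffs /u00/app/oracle/patches/12345678
./runInstaller -silent -createGoldImage -destinationLocation /u00/app/oracle/images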

 

Cet article 18c runInstaller -silent est apparu en premier sur Blog dbi services.

Documentum – Silent Install – Docbases/Repositories

Sat, 2018-08-11 23:31

In previous blogs, we installed the Documentum binaries as well as a docbroker (plus licence(s) if needed) in silent mode. In this one, we will see how to install docbases/repositories, and by that I mean either a Global Registry (GR) repository or a normal repository.

As you all know, you will need one repository to act as the GR, and I would always recommend setting up a GR that isn't used by the end users (no real documents). That's why I will split this blog into two parts: the installation of a GR and then the installation of a normal repository that will be used by end users. So, let's get to it.

 

1. Documentum Global Registry repository installation

The properties file for a GR installation is as follows (it's a big one):

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_GR.properties
### Silent installation response file for a Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.create.new
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.DOCBASE_ID=1010101
SERVER.DOCBASE_DESCRIPTION=Global Registry repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_GR_DOCBASE_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=gr_docbase
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=gr_docbase
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=gr_docbase

SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION=USE_THIS_REPOSITORY
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$

 

In the above commands, I didn't put the SERVER.DOCUMENTUM_DATA and SERVER.DOCUMENTUM_SHARE values into the file directly but used sed commands to update the file afterwards, because I didn't want to push you towards a specific path for your installation like /app or /opt or /var or whatever… This choice is yours, so I just used sub-folders of $DOCUMENTUM and used this environment variable to set both parameters, so you can choose which paths you want for the Data and Share folders (the above is the default but you can set what you want).

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbase/repository, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. That’s the purpose of this properties file so it will be true
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. Same as the licence, it will be false
  • SERVER.DOCBASE_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.use.existing.aek.lockbox: Whether to use an existing aek or create a new one. Possible values are “common.create.new” or “common.use.existing”. In this case, it is the first docbase/repository created so we are creating a new one. In case of migration/upgrade, you might want to use an existing one (after upgrading it) …
  • common.aek.passphrase.password: The password to be used for the AEK
  • common.aek.key.name: The name of the AEK key to be used. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm to be used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you want to use a Lockbox to protect the AEK key. If set to true, a lockbox will be created and the AEK key will be stored in it
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox to be used. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password to be used for the Lockbox
  • SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS: Whether or not the “SERVER.DOCUMENTUM_DATA” and “SERVER.DOCUMENTUM_SHARE” are using a SAN or NAS path
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository to be created (dm_docbase_config.object_name)
  • SERVER.DOCBASE_ID: The ID of the docbase/repository to be created
  • SERVER.DOCBASE_DESCRIPTION: The description of the docbase/repository to be created (dm_docbase_config.title)
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.TEST_DOCBROKER: Whether or not you want to test the docbroker/connection broker connection during the installation. I would recommend to always set this to true to be sure the docbase/repository is installed properly… If a docbroker/connection broker isn't available, the installation will not be complete (DARs installation for example) but you will not see any error unless you manually check the installation log…
  • SERVER.CONNECT_MODE: The connection mode of the docbase/repository to be used (dm_server_config.secure_connect_mode), it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured:
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository
      • SERVER.KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.TRUST_STORE_FILE_NAME: The name of the p7b file that contains the SSL Certificate needed to trust the targets (from a docbase point of view)
      • SERVER.CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets (from a DFC/client point of view)
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • SERVER.USE_EXISTING_DATABASE_ACCOUNT: Whether or not you want to use an existing DB account or create a new one. I don't like when an installer requests full access to a DB, so I usually prepare the DB user upfront with only the bare minimal set of permissions required and then use this account for the Application (Documentum docbase/repository in this case)
  • SERVER.INDEXSPACE_NAME: The name of the tablespace to be used to store the indexes (to be set if using existing DB User)
  • SERVER.DATABASE_CONNECTION: The name of the Database to connect to. This needs to be available on the tnsnames.ora if using Oracle, aso…
  • SERVER.DATABASE_ADMIN_NAME: The name of the Database admin account to be used. There is no reason to put anything other than the schema owner's account here… If you configured the correct permissions, you don't need a DB admin account at all
  • SERVER.SECURE.DATABASE_ADMIN_PASSWORD: The password of the above-mentioned account
  • SERVER.DOCBASE_OWNER_NAME: The name of the schema owner’s account to be used for runtime
  • SERVER.SECURE.DOCBASE_OWNER_PASSWORD: The password of the schema owner’s account
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used. To be set only when using Oracle…
  • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION: If this docbase/repository should be a Global Registry, then set this to “USE_THIS_REPOSITORY”, otherwise do not set the parameter. If the GR is on a remote host, you need to set this to “SPECIFY_DIFFERENT_REPOSITORY” and then use a few additional parameters to specify the name of the GR repo and the host it is currently running on
  • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account to be created. This is usually something like “dm_bof_registry”
  • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password to be used for the BOF Registry account
  • SERVER.ENABLE_XHIVE: Whether or not you want to enable the XML Store Feature. As I mentioned in the blog about the licences, this is one of the cases for which you might want to enable the licence during the docbase/repository configuration. If you want to enable the XHIVE, you will need to specify a few additional parameters like the XDB user/password, host and port, and so on…
  • SERVER.CONFIGURATOR.DISTRIBUTED_ENV: Whether or not you want to enable/configure the DMS. If you set this to true, you will need to add a few more parameters like the DMS Action to be performed, the webserver port, host, password, aso…
  • SERVER.ENABLE_RKM: Whether or not you want to enable/configure the RKM. If you set this to true, you will need to add a few more parameters like the host/port on which the keys will be stored, the certificates and password, aso…
  • START_METHOD_SERVER: Whether or not you want the JMS to be restarted once the docbase/repository has been created. Since we usually create at least 2 docbases/repositories, we can leave it stopped for now
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always leave it as false… I know that the name of this parameter is strange but that's the name coming from the templates… If you look a little bit on the internet, you might find “SERVER.CONTINUE.MORE.COMPONENT” instead… So which one is working and which one isn't is still a mystery to me. I use the first one, but since I always set it to false, that doesn't have any impact for me and I never saw any errors coming from the log files.

 

Once the properties file is ready, you can install the Global Registry repository using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_GR.properties

 

Contrary to the previous installations, this one will take some time (around 20 minutes) because it needs to create the docbase/repository, then install the DARs, and so on… Unfortunately, there is no feedback on the progress, so you just need to wait, and in case something goes wrong, you won't even notice since no errors are shown… Therefore, check the logs to be sure!
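To check, you can grep the generated logs; a quick sketch (the log locations are assumptions based on a default installation and may differ on your setup):

[dmadmin@content_server_01 ~]$ grep -iE "error|fail" $DM_HOME/install/logs/install.log*
[dmadmin@content_server_01 ~]$ tail -50 $DOCUMENTUM/dba/log/gr_docbase.log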

 

2. Other repository installation

Once you have a Global Registry repository installed, you can install the repository that will be used by the end-users (which isn't a GR then). The properties file for an additional repository is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_Other.properties
### Silent installation response file for a Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.use.existing
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.DOCBASE_ID=1010102
SERVER.DOCBASE_DESCRIPTION=Docbase1 repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_DOCBASE1_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=docbase1
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=docbase1
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=docbase1

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=true
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$

 

I won't list all these parameters again but just the ones that changed, except the docbase/repository name/id/description and DB accounts/tablespaces since these are pretty obvious (a quick diff of the two files, shown after the list, gives the full picture):

  • Updated parameter value:
    • common.use.existing.aek.lockbox: As mentioned above, since the AEK key has already been created (as part of the GR installation), this now needs to be set to “common.use.existing” instead
  • Removed parameters (all of these will be taken from the dfc.properties that has been updated as part of the GR installation):
    • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD
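
A quick way to see everything that differs between the two properties files (using the files created above):

[dmadmin@content_server_01 ~]$ diff /tmp/dctm_install/CS_Docbase_GR.properties /tmp/dctm_install/CS_Docbase_Other.properties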

 

Once the properties file is ready, you can install the additional repository in the same way:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_Other.properties

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Primary” Content Server using the silent installation provided by Documentum. In a later blog, I will talk about specificities related to a “Remote” Content Server for a High Availability environment.

 

The article Documentum – Silent Install – Docbases/Repositories appeared first on Blog dbi services.

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT

Fri, 2018-08-10 09:47

When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don't want to configure a virtual IP address (VIP), you can simply list all the addresses in the client connection string. But then you need to reduce the timeout. A short duration of 1 to 5 seconds will be ok most of the time, but in case of a network issue, you want to give a chance to retry with a longer timeout. This post is about the connection string parameters to define this. Of course, all of this is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and tests.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is that when 10.10.10.10 is down, 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that will always be up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.

Here is a tnsping with the above tnsnames.ora entry and when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes because there is a 1 minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, just adding the TRANSPORT_CONNECT_TIMEOUT to the connection string description to reduce the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds – 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. There, RETRY_COUNT=2 will give 3 attempts (1 + 2 retries) to the address list.
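Here is the corresponding entry (reconstructed from the connect string resolved by tnsping below):

RETRY_COUNT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(RETRY_COUNT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

And the tnsping result: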

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried 2 more times, which in total takes 3x2x4=24 seconds.

DESCRIPTION_LIST

The TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT parameters are used only within a DESCRIPTION. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for bad network times. And then one final attempt to each with the default timeout to be sure that the servers are down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just time the total attempts, I'll strace each connection:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With '-T', strace shows the duration of each system call between angle brackets after the return code. You can see here the 1-second timeout attempts to each address, then two attempts on each address with a 5-second timeout, and finally one attempt on each with the default 60-second timeout.

Note that I have added (LOAD_BALANCE=OFF) here because the default is ON in a DESCRIPTION_LIST, but here I want the descriptions to be tried in the order I specified them.

 

The article TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT appeared first on Blog dbi services.

ATP vs ADW – the Autonomous Database lockdown profiles

Thu, 2018-08-09 14:49

The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this distinction is, here is how they were defined in the Oracle7 Tuning Book:

[Screenshot: OLTP vs DSS definitions from the Oracle7 Tuning Book]

The definition has not changed a lot. But the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan and Result Cache, we can even see that indexes, materialized views, star transformation, hints, etc. are disabled in the Autonomous Datawarehouse cloud service.

The differences between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, have been described by Maria Colgan after Larry Ellison's announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They are using the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and same instance. They are different PDBs and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: ATP lockdown profile is called ‘OLTP’ and ADW lockdown profile is called ‘DWCS’.

Options

The only difference is about partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

All disabled features are the same:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS and V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I've seen no allowed way to enable SQL Trace.

DDL Statements

In the ADW service, the local user cannot create indexes and materialized views. They are supposed to be created autonomously when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET statements for the common users, but allow only a few parameters to be changed by local users.
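You can check this with the same query pattern on V$LOCKDOWN_RULES as above (a sketch; the exact output depends on the service you are connected to):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='STATEMENT' and rule='ALTER SYSTEM' order by 1, 2 ,3 nulls first, 4 nulls first,5;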

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.

Basically, all the features needed for OLTP have been there for years in the Oracle Database. Under the 'autonomous' umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

The article ATP vs ADW – the Autonomous Database lockdown profiles appeared first on Blog dbi services.

Using Oratcptest

Thu, 2018-08-09 10:32

Recently I did some tests with Oratcptest. This is a command line tool which can be used to measure network bandwidth and network latency between a client and a server.
In this blog I am going to show how to use this tool in a Data Guard environment.
I am using a Data Guard environment with the following configuration.


Configuration - PROD

  Protection Mode: MaxAvailability
  Members:
  PROD_SITE1 - Primary database
    PROD_SITE2 - Physical standby database
    PROD_SITE3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> show database 'PROD_SITE1' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE2' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE3' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL>

The installation of Oratcptest is very simple. We just have to download the oratcptest.jar file from the Oracle support site on all servers. Note that JRE 6 or higher is required. In my case I have JRE 8 on all servers:

[oracle@primaserver oratcptest]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
[oracle@primaserver oratcptest]$

[oracle@standserver1 oratcptest]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver1 oratcptest]$

[oracle@standserver2 ~]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver2 ~]$
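
To get the jar file onto the standby servers, a simple copy from the primary server is enough (the target directory is an assumption based on the prompts above):

[oracle@primaserver oratcptest]$ scp oratcptest.jar oracle@standserver1:~/oratcptest/
[oracle@primaserver oratcptest]$ scp oratcptest.jar oracle@standserver2:~/oratcptest/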

We can invoke the help command to see all options available for the oratcptest

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar -help

Now we can assess the network bandwidth for our Data Guard configuration. Note that I am using simple virtual machines, but the steps will be the same on real production servers.
We first have to determine the highest redo generation rate of the database. The following query can be used:

select thread#, sequence#, blocks*block_size/1024/1024 MB,
       (next_time-first_time)*86400 sec,
       blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s"
from v$archived_log
where (next_time-first_time)*86400 <> 0
  and first_time between to_date('2018/08/09 08:00:00','YYYY/MM/DD HH24:MI:SS')
                     and to_date('2018/08/09 11:00:00','YYYY/MM/DD HH24:MI:SS')
  and dest_id=2
order by first_time;

   THREAD#  SEQUENCE#         MB        SEC       MB/s
---------- ---------- ---------- ---------- ----------
         1        124 .003417969          9 .000379774
         1        125 .271972656        184 .001478112
         1        126 .001953125          5 .000390625
         1        127 11.3662109        915 .012422088
         1        128 10.8466797       6353 .001707332

We can see that the highest value is .012422088 MB/s. The goal is to see if our network bandwidth can support this rate.
As we are using SYNC mode, the primary database will wait for a confirmation from the standby databases that they have written the change to disk before informing the application of the commit success.
For SYNC transport we then have to collect the average redo write size, which is calculated using the following formula:

Average=redo size / redo writes

These metrics can be obtained using an AWR report. In our case the value is

Average=15924844/4015=3967
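
If no AWR report is at hand, a rough value can also be derived from the cumulative statistics in v$sysstat (a sketch; the values are cumulative since instance startup, so take it only as an approximation):

SQL> select round(max(decode(name,'redo size',value)) / max(decode(name,'redo writes',value))) "avg redo write size" from v$sysstat where name in ('redo size','redo writes');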

Now we are going to simulate SYNC writes over the network using Oratcptest. Note that I need the location of my standby redo logs:

SQL> select member from v$logfile where type='STANDBY';

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/stredo01.log
/u01/app/oracle/oradata/PROD/stredo02.log
/u01/app/oracle/oradata/PROD/stredo03.log
/u01/app/oracle/oradata/PROD/stredo04.log

From the standby server I can run the following command:

[oracle@standserver1 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp
OraTcpTest server started.

From the primary server

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555  -write  -mode=sync -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = YES
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:26:16) The server is ready.
                    Throughput             Latency
(14:26:18)      1.124 Mbytes/s            3.375 ms   (disk-write 2.537 ms)
(14:26:20)      0.813 Mbytes/s            4.668 ms   (disk-write 3.775 ms)
(14:26:22)      1.094 Mbytes/s            3.467 ms   (disk-write 2.773 ms)
(14:26:24)      1.004 Mbytes/s            3.778 ms   (disk-write 2.991 ms)
(14:26:26)      0.560 Mbytes/s            6.779 ms   (disk-write 5.623 ms)
(14:26:26) Test finished.
               Socket send buffer = 11700 bytes
                  Avg. throughput = 0.920 Mbytes/s
                     Avg. latency = 4.126 ms (disk-write 3.280 ms)

[oracle@primaserver oratcptest]$

We can see that the average throughput is 0.920 Mbytes/s, which is sufficient to handle our highest peak rate of .012422088 MB/s. We can also note the latency, which includes the time to send the message to the server host, the optional disk write at the server host, and the acknowledgment back to the client process.
If we are using ASYNC mode, the test looks like this:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555    -mode=async -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = ASYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:58:03) The server is ready.
                    Throughput
(14:58:05)     13.897 Mbytes/s
(14:58:09)      5.193 Mbytes/s
(14:58:11)     40.007 Mbytes/s
(14:58:13)     21.475 Mbytes/s
(14:58:13) Test finished.
               Socket send buffer = 210600 bytes
                  Avg. throughput = 16.901 Mbytes/s

[oracle@primaserver oratcptest]$

Conclusion
In this blog we have talked about Oratcptest, a simple tool which can help to assess our network bandwidth. It can be useful before performing RMAN backups or file copies over the network, or before setting up a Data Guard environment.
Reference: Oracle Doc ID 2064368.1

 

The article Using Oratcptest appeared first on Blog dbi services.

MERGE JOIN CARTESIAN: a join method or a join type?

Wed, 2018-08-08 17:00

I'll present on join methods at POUG and DOAG. I'll show how the different join methods work in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join and Merge Join Cartesian on the same query. I'll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I'll show the execution plan with SQL Monitoring in text mode. And I'll put some gdb breakpoints on the 'qer' (query execution rowsource) functions to run the plan operations step by step. Then I'll do the same on a different query in order to show in detail the 12c adaptive plans.
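To give an idea of how this looks, here is a minimal sketch of the gdb part (the PID is a placeholder for the dedicated server process of the test session, taken from v$process.spid):

gdb -p 12345
(gdb) break qertbFetch
(gdb) break qersoFetchSimple
(gdb) continue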

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Join Cartesian really a join method? I mean, my query is not a cartesian join. I have all join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate. As if doing physically what the old Oracle join syntax is doing logically: not putting any predicates in the FROM clause and adding the join predicates in the WHERE clause to filter over it.

If I look at the 12.2 documentation, it is a Join method
[Screenshot: 12.2 documentation listing MERGE JOIN CARTESIAN under the join methods]

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to get only the first matching row. Anti join to get all rows which do not match. Outer join to get the non-matching rows in addition to those which match. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not 'how' but 'what'.

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, when the SCOTT schema has employees in departments 10, 20 and 30? And only 3 employees here, repeated 4 times for each department name? That's a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 Wrong result with USE_MERGE_CARTESIAN hint finally fixed in 12c (12.2 and backported in 12.1 PSU)

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
[Screenshot: 18c documentation listing Cartesian joins under the join types]

Exactly the same paragraph, but now under join types (the 'what') rather than under join methods (the 'how').

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that's a join type described by the CROSS JOIN in the ANSI join syntax, or the lack of related predicates in the old syntax. This is 'what'. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product on small rowsources. This can be part of star transformation, where fact rows are joined back to the cartesian product of filtered dimensions in order to project the dimension attributes. This is 'how' it will be executed. We also see it when the optimizer underestimates the cardinalities, and it is then followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
[Screenshot: 18c documentation, "When the Optimizer Considers Cartesian Joins"]

  • No join condition exists: that’s when cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order
  • A Cartesian join is an efficient method: then it is a method there, even if documented in join types.

In conclusion, a cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea that can return wrong results.

So, for this one I'll explicitly run a CROSS JOIN:
[Screenshot: query, SQL Monitor and gdb backtrace during the CROSS JOIN execution]

The query is on top. The SQL Monitor is in the middle, showing that we are currently active reading rows from EMP. The bottom shows the 'qer' functions backtrace: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo) and the BUFFER SORT (qerso), to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only for the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS ('process underlying row source' – see Frits Hoogland annotations) is run only once. This is the big difference with Nested Loops, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loops, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.

Last comment: we don't see any function which really sorts the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting, despite the name of the BUFFER SORT operation. More info on Jonathan Lewis' blog.

 

The article MERGE JOIN CARTESIAN: a join method or a join type? appeared first on Blog dbi services.

Backing up and restoring EDB containers in MiniShift/OpenShift

Wed, 2018-08-08 09:38

The last blogs in the series are already some days old: Setting up MiniShift, Deploying EDB containers in MiniShift/OpenShift, Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift, Scaling the EDB containers in MiniShift/OpenShift, EDB Failover Manager in EDB containers in Minishift/OpenShift and EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers. What is missing is how you can backup and restore instances running in this container deployment and that is the topic of this post.

What you usually use to back up and restore EDB Postgres is BART, and the container world is no exception to that. Let's see how that works.

My current deployment looks like this:
[Screenshot: current deployment in the OpenShift console]

Two pgpool containers are serving three database containers which you can also check on the command line:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-b8lvj   1/1       Running   0          3m        172.17.0.9   localhost   masterdb
edb-as10-0-1-gj76h   1/1       Running   0          1m        172.17.0.5   localhost   standbydb
edb-as10-0-1-sb5lt   1/1       Running   0          2m        172.17.0.4   localhost   standbydb
edb-pgpool-1-qzk5v   1/1       Running   0          3m        172.17.0.7   localhost   queryrouter
edb-pgpool-1-rvtl6   1/1       Running   0          3m        172.17.0.6   localhost   queryrouter

What we want to do is to back up the database instances, or at least one of them. What you need to prepare before deploying the BART container is shared storage between the database containers and the BART container. This is especially important for the restore case, as the restore procedure needs to access the backup which is hosted in the BART container. Notice that this storage configuration has the "Read-Write-Many" attribute:
[Screenshot: storage configuration with Read-Write-Many access mode]
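Such a claim can be created upfront; a minimal sketch (the name, size and storage class are placeholders, only the ReadWriteMany access mode matters here):

dwe@dwe:~$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-backup-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF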

When I initially deployed the database containers I provided exactly this storage claim and volume as parameters, so I have it available in the database containers:
[Screenshot: database deployment referencing the backup volume]

This means in any of the database containers I will be able to see the backup volume:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/
total 12
drwxrwx---  3 root    root 4096 Aug  6 11:49 .
drwxr-xr-x 86 root    root 4096 Aug  8 14:03 ..
drwxrwxr-x  4 edbuser root 4096 Aug  6 11:49 edb-bart-1-89k7s
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/
total 16
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 .
drwxrwx--- 3 root    root 4096 Aug  6 11:49 ..
drwxrwxr-x 2 edbuser root 4096 Aug  6 11:49 bart_log
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 pgbackup
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/
total 12
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 .
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 edb
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/edb/
total 16
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 .
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:52 1533556356576
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:49 archived_wals

The same storage configuration then needs to be provided to the BART deployment. Here is the yaml file for the BART deployment:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
              
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you don't want to clean up.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true
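
If you have not done so yet, the template above can be imported into the project from the command line before anything is deployed (the file name is just an assumption here; use whatever you saved the template as):

dwe@dwe:~$ oc create -f edb-as10-backup-template.yaml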

Once that is imported we can deploy the BART container:
[Screenshot: Selection_032]
Notice that this is actually the same storage configuration as was used to set up the database containers.
[Screenshot: Selection_033]

What I didn’t tell you is that there is one more step to do beforehand. As the BART container is supposed to back up all the instances in a project, we need to pass the BART configuration file to the container via a configMap. In this setup I only have one instance, so the configMap would look like this:
[Screenshot: Selection_034]
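
Just to give an idea of the shape such a configMap can take when it carries a plain BART configuration file (the configMap name, the data key, the host names and paths below are assumptions for illustration only; the actual content is what the screenshot shows and what your project requires):

apiVersion: v1
kind: ConfigMap
metadata:
  name: bart-config
data:
  bart.cfg: |
    [BART]
    bart_host = enterprisedb@localhost
    backup_path = /edbbackup
    pg_basebackup_path = /usr/edb/as10/bin/pg_basebackup
    retention_policy = 5 BACKUPS

    [EDB]
    host = edb-service
    port = 5444
    user = repl
    cluster_owner = enterprisedb

Instead of maintaining the YAML by hand you could also create it from an existing file with "oc create configmap bart-config --from-file=bart.cfg".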

Here you would add all the instances you need to back up per project. Once the BART container is ready:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-b8lvj   1/1       Running   0          17m
edb-as10-0-1-gj76h   1/1       Running   0          14m
edb-as10-0-1-sb5lt   1/1       Running   0          16m
edb-bart-1-7cgfv     1/1       Running   0          19s
edb-pgpool-1-qzk5v   1/1       Running   0          17m
edb-pgpool-1-rvtl6   1/1       Running   0          17m

… you can connect to it and perform a manual backup:

dwe@dwe:~$ oc rsh edb-bart-1-7cgfv
sh-4.2$ bart backup -s edb
INFO:  creating backup for server 'edb'
INFO:  backup identifier: '1533738106320'
65043/65043 kB (100%), 1/1 tablespace

INFO:  backup completed successfully
INFO:  backup checksum: 16fba63925ac3e77d474a36496c2a902 of base.tar
INFO:  
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1533738106320
BACKUP NAME: none
BACKUP PARENT: none
BACKUP LOCATION: /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320
BACKUP SIZE: 63.52 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: UTC
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
 ChkSum                             File      
 16fba63925ac3e77d474a36496c2a902   base.tar  

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-08-08 14:21:46 UTC
STOP TIME: 2018-08-08 14:21:47 UTC
TOTAL DURATION: 1 sec(s)
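
Assuming the standard BART command line is available in the container, you should also be able to list the catalogued backups from there:

sh-4.2$ bart SHOW-BACKUPS -s edb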

This backup is now available in the BART container, but in addition it is accessible from the database container:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/
total 65060
drwxr-xr-x 2 edbuser root     4096 Aug  8 14:21 .
drwxr-xr-x 4 edbuser root     4096 Aug  8 14:21 ..
-rwxr-xr-x 1 edbuser root      664 Aug  8 14:21 backupinfo
-rwxr-xr-x 1 edbuser root 66605568 Aug  8 14:21 base.tar

In case you need to restore it, you would deploy a new database configuration specifying this backup as the “Restore file”:
[Screenshot: Selection_035]
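
Going by the paths listed above, the value for the RESTORE_FILE parameter would presumably point to the base.tar of the backup, something like:

/edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/base.tar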

One downside with the current versions of the containers: you cannot do point-in-time recovery. Only restores from full backups are supported for now. This will change in the next release, though.

Have fun with the containers …

 

Cet article Backing up and restoring EDB containers in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL

Wed, 2018-08-08 06:52

We installed Fusion Middleware Reports and Forms 12.2.1.2.0 on a Linux server.
These are single-node Reports and Forms WebLogic Servers.
After disabling the non-SSL listen port, we got the error below in the WebLogic Server log files when starting the managed servers with only SSL ports enabled.

####<Aug 19, 2017 9:22:44 PM GMT+00:00> <Notice> <Security> <host01.example.com> <WLS_Reports> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320441764877> <BEA-090082> <Security initializing using security realm myrealm.>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Critical> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765180> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: Servers do not have a common channel to communicate over
java.lang.AssertionError: Servers do not have a common channel to communicate over
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.createConfiguration(ConfiguredServersMonitorImpl.java:124)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:55)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:28)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl$Factory.<clinit>(ConfiguredServersMonitorImpl.java:39)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.getInstance(ConfiguredServersMonitorImpl.java:44)
at weblogic.cluster.messaging.internal.server.UnicastFragmentSocket.<init>(UnicastFragmentSocket.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at weblogic.cluster.FragmentSocketWrapper.getUnicastFragmentSocket(FragmentSocketWrapper.java:76)
at weblogic.cluster.FragmentSocketWrapper.<init>(FragmentSocketWrapper.java:64)
at weblogic.cluster.FragmentSocketWrapper.getInstance(FragmentSocketWrapper.java:47)
at weblogic.cluster.MulticastManager.<init>(MulticastManager.java:158)
at weblogic.cluster.MulticastManager.initialize(MulticastManager.java:111)
at weblogic.cluster.ClusterService.startService(ClusterService.java:196)
at weblogic.server.ServiceActivator.start(ServiceActivator.java:96)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765413> <BEA-000365> <Server state changed to FAILED>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Error> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765414> <BEA-000383> <A critical service failed. The server will shut itself down>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765426> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Info> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765447> <BEA-000236> <Stopping execute threads.>

Even if no cluster is used, the default Reports and Forms domain configuration creates WebLogic Clusters.

To solve the “Servers do not have a common channel to communicate over” error, perform the following steps for each cluster as an administrator:

  1. Login into the Administration Console
  2. Expand the Environment tab and navigate to Clusters
  3. In production mode, you need to select “Lock and edit”
  4. Go to the Configuration tab and select the ‘Replication’ subtab
  5. On the Replication page, next to ‘Replication Channel’, check the box beside “Secure Replication Enabled” (to set it to true)
  6. Activate the changes
  7. Restart the managed server(s).

Or in WLST:

Start wlst.sh (*) and connect to the WebLogic domain:

connect('AdminUser','password','t3s://host01.example.com:7002')

edit()
startEdit()

cd('/Clusters/cluster_reports')
cmo.setClusterType('none')
cmo.setPersistSessionsOnShutdown(false)
cmo.setReplicationChannel('ReplicationChannel')
cmo.setSecureReplicationEnabled(true)

save()
activate()

(*) wlst.sh is located in the Middleware software home $M_HOME/oracle_common/common/bin directory
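
To verify the change afterwards, still in WLST (read-only, so no edit session is needed), something like this should do, assuming the same cluster name:

cd('/Clusters/cluster_reports')
print get('SecureReplicationEnabled')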

 

Cet article New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL est apparu en premier sur Blog dbi services.

How much free space can be reclaimed from a segment?

Tue, 2018-08-07 14:06

You have the feeling that your table takes more blocks than it should? Here are the queries I use to quickly check the free space. The idea is to call DBMS_SPACE.SPACE_USAGE and infer the minimum space from the percentages. For example, a block in FS3 (defined as having at least 50 to 75% free space) is supposed to have at least 50% of free space. Of course it can have more, but you don’t know.
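
Purely as an illustration of the underlying call (SCOTT.DEMO is taken from the sample output further down), dumping the raw byte counts per freeness bucket for a single table could look like this:

set serveroutput on
declare
 unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
 -- raw DBMS_SPACE.SPACE_USAGE figures for one heap table
 dbms_space.space_usage('SCOTT','DEMO','TABLE',unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb);
 dbms_output.put_line('FS1 (at least  0% free): '||fs1b||' bytes');
 dbms_output.put_line('FS2 (at least 25% free): '||fs2b||' bytes');
 dbms_output.put_line('FS3 (at least 50% free): '||fs3b||' bytes');
 dbms_output.put_line('FS4 (at least 75% free): '||fs4b||' bytes');
 dbms_output.put_line('Unformatted            : '||unfb||' bytes');
end;
/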

Here is some PL/SQL that does this for the 10 largest ASSM segments:

set serveroutput on
declare
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
for i in (select * from (select * from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc) where 10>=rownum)
loop
begin
dbms_space.space_usage(i.owner,i.segment_name,i.segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>i.partition_name);
dbms_output.put_line(to_char((unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024/1024,'999G999D999')||' GB free in '||i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'"');
exception
when others then dbms_output.put_line(i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'": '||sqlerrm);
end;
end loop;
end;
/

The output looks like:

.001 GB free in INDEX "DEMO"."ACCOUNT_PK" partition ""
.001 GB free in TABLE "APEX_040200"."WWV_FLOW_PAGE_PLUGS" partition ""
.009 GB free in TABLE "SCOTT"."DEMO" partition ""
.000 GB free in TABLE "APEX_040200"."WWV_FLOW_STEP_ITEMS" partition ""
.003 GB free in INDEX "SYS"."WRH$_SYSMETRIC_HISTORY_INDEX" partition ""
.000 GB free in TABLE "MDSYS"."SDO_CS_SRS" partition ""
.002 GB free in INDEX "SYS"."I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST" partition ""
.006 GB free in TABLE "SYS"."WRH$_SYSMETRIC_HISTORY" partition ""
.002 GB free in TABLE "SYS"."WRH$_SQL_PLAN" partition ""

If you are in 12c, an inline function in the query might come in handy:

with function freebytes(segment_owner varchar2, segment_name varchar2, segment_type varchar2,partition_name varchar2) return number as
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
dbms_space.space_usage(segment_owner,segment_name,segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>partition_name);
return unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75;
end;
select round(freebytes(owner,segment_name,segment_type,partition_name)/1024/1024/1024,3) free_GB,segment_type,owner,segment_name,partition_name
from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc fetch first 10 rows only
/

The result looks like:

FREE_GB SEGMENT_TYPE OWNER       SEGMENT_NAME                   PARTITION_NAME
------- ------------ ----------- ------------------------------ --------------
      0 TABLE        DEMO        ACCOUNTS
  0.001 INDEX        DEMO        ACCOUNT_PK
  0.001 TABLE        APEX_040200 WWV_FLOW_PAGE_PLUGS
  0.009 TABLE        SCOTT       DEMO
  0.003 INDEX        SYS         WRH$_SYSMETRIC_HISTORY_INDEX
      0 TABLE        APEX_040200 WWV_FLOW_STEP_ITEMS
  0.002 INDEX        SYS         I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST
      0 TABLE        MDSYS       SDO_CS_SRS
  0.006 TABLE        SYS         WRH$_SYSMETRIC_HISTORY
  0.002 TABLE        SYS         WRH$_SQL_PLAN

Future evolutions of these scripts will be published on GitHub:
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_plsql.sql
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_sql.sql

Note that having free space does not mean that you have to shrink or reorganize. Try to understand what happened to your data first, and whether this space will be reused soon.

 

Cet article How much free space can be reclaimed from a segment? est apparu en premier sur Blog dbi services.

Catching interface state changes with Network Manager

Mon, 2018-08-06 10:08

Sometimes you need to react to state changes of your network interfaces. Maybe you want to get notified by mail, or you want to execute a script that does something when such an event is trapped. As usual I will be using CentOS 7 for this little demonstration, but as most distributions use systemd and NetworkManager nowadays, this should work more or less the same on Debian-based distributions.

As this will be a short post anyway: the way to do it is to use the dispatcher scripts that come with NetworkManager. All you need to do to get notified of down or up events is to create a script in /etc/NetworkManager/dispatcher.d/:

root@:/etc/NetworkManager/dispatcher.d/ [] pwd
/etc/NetworkManager/dispatcher.d
root@:/etc/NetworkManager/dispatcher.d/ [] cat 30-network 
#!/bin/bash

INTERFACE=$1
ACTION=$2

echo $INTERFACE >> /var/log/tmp.log
echo $ACTION >> /var/log/tmp.log

The script will get two parameters passed in by default: the name of the interface and the action that just happened. What you do with these is totally up to you. The only point you need to take care of (which is also mentioned in the documentation): “Each script should be a regular executable file, owned by root. Furthermore, it must not be writable by group or other, and not setuid.”
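
If you create the script from scratch, the following should give it the expected ownership and permissions (matching the other dispatcher scripts shown below):

root@:/etc/NetworkManager/dispatcher.d/ [] chown root:root 30-network
root@:/etc/NetworkManager/dispatcher.d/ [] chmod 755 30-network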

root@:/etc/NetworkManager/dispatcher.d/ [] ls -la
total 16
drwxr-xr-x. 5 root root  131 Aug  6 17:02 .
drwxr-xr-x. 7 root root  134 Jul 20 10:18 ..
-rwxr-xr-x. 1 root root  175 Jan  2  2018 00-netreport
-rwxr-xr-x. 1 root root 1123 May 15 15:03 11-dhclient
-rwxr-xr-x. 1 root root  985 Sep 15  2017 20-chrony
-rwxr-xr-x. 1 root root  108 Aug  6 17:02 30-network
drwxr-xr-x. 2 root root    6 Jun 27 16:39 no-wait.d
drwxr-xr-x. 2 root root    6 Jun 27 16:39 pre-down.d
drwxr-xr-x. 2 root root    6 Jun 27 16:39 pre-up.d

As soon as your script is in place, it will get kicked off whenever the interface state changes:

root@:/etc/NetworkManager/dispatcher.d/ [] systemctl restart network
root@:/etc/NetworkManager/dispatcher.d/ [] cat /var/log/tmp.log

connectivity-change
enp0s3
down
enp0s8
down
enp0s3
up

connectivity-change
enp0s8
up
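
From here it is easy to go further. A slightly more useful variant (just a sketch, assuming a local MTA and the mail/mailx command are available, and using a hypothetical script name) could send a notification whenever an interface goes down:

#!/bin/bash
# /etc/NetworkManager/dispatcher.d/40-notify (hypothetical example)
INTERFACE=$1
ACTION=$2

if [ "${ACTION}" = "down" ]; then
    echo "Interface ${INTERFACE} went down on $(hostname) at $(date)" \
      | mail -s "NetworkManager: ${INTERFACE} is down" root
fi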

Hope that helps …

 

Cet article Catching interface state changes with Network Manager est apparu en premier sur Blog dbi services.
