Feed aggregator

AWS: How to delete a static website via aws cli

Dietrich Schroff - Thu, 2018-11-22 14:22
After creating a static website in S3 via the CLI, now the deletion:

First try was:

$ aws s3api delete-bucket --bucket my.webtest



An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty
Ok. This will not work. First get the objects:

$ aws s3api list-objects --bucket my.webtest

{
    "Contents": [
        {
            "LastModified": "2018-11-17T19:18:53.000Z",
            "ETag": "\"e56b419be959169c15260cd721735e47\"",
            "StorageClass": "STANDARD",
            "Key": "index.html",
            "Owner": {
                "DisplayName": "d.schroff",
                "ID": "6c301aed95f62fb17532da6c93209c898a1e07051e520c6bb7fab30769cc495c"
            },
            "Size": 568
        }
    ]
}
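The bucket has to be emptied before it can be dropped, so the listed object needs to be removed first – something along these lines (the key name is taken from the listing above):

$ aws s3api delete-object --bucket my.webtest --key index.html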
and the bucket can be deleted:
$ aws s3api delete-bucket --bucket my.webtest
A crosscheck via web console:


And the website is not there anymore:

Oracle Privilege Analysis Now Free in EE from 18c and back ported to all 12c

Pete Finnigan - 4 hours 30 min ago
Wow!! I just got an email from someone in Oracle to let me know that the Privilege Analysis feature of Database Vault has had its licensing changed as of this week to now be free as part of Enterprise Edition....[Read More]

Posted by Pete On 20/11/18 At 10:06 PM

Categories: Security Blogs

Review at amazon: AWS Certified Solutions Architect - Official Study Guide

Dietrich Schroff - Tue, 2018-11-20 11:46
This weekend I read this book:

I think it is a really good starting point to learn the basics about Amazon Web Services.
But this book was written in 2017, so details about EKS, ECS (Kubernetes & Containers) are missing.

My favourite was "Go Global in Minutes" as one directive for cloud computing, but this statement was copied from an Amazon whitepaper, the AWS Overview.



If you are interested, take a look at my review at amazon.de (like all my reviews: written in German ;-)).

ODA and CIS / GDPR features

Yann Neuhaus - Tue, 2018-11-20 10:50

We all know that security becomes…sorry, is one of the hottest topics when setting up an IT environment. One basis for that is to be compliant with regulations or standards such as GDPR or CIS. What is not so well known is that the ODA already integrates some tooling to support you with that.

During this first day @DOAG2018 I followed an interesting session from Tammy Bednar, Senior Director of Product Management for ODA, about ODA and security.

Besides the traditional points about the integrated stack of the ODA, SUDO configuration or the Oracle Database security options, I also heard about nice scripts available on the ODA since version 12.2.1.3 to check ODA compliance against CIS standards.

As a reminder, the CIS (Center for Internet Security) produces security guidelines for components such as Linux, databases and much more. As a member of the CIS, dbi services proposes security audits based on these guidelines (https://www.dbi-services.com/offering/services/it-security-services/).

On the ODA there is now, out of the box, a "small" Python script which allows you to check the CIS "status" at OS level for your ODA.

To do so you can simply go to /opt/oracle/oak/bin and run the cis.py script.
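In practice that is no more than the following (a minimal sketch; any options the script may accept are not shown here):

# run the out-of-the-box CIS compliance check
cd /opt/oracle/oak/bin
./cis.py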

[Photo of the cis.py compliance summary]

Sorry, as I couldn't take my ODA with me to Nürnberg, I only have a picture of the script so far ;-)

There are 2 pieces of good news when running this script on a freshly installed ODA.

  1. The ODA is out of the box already 41% CIS compliant, which is not bad at all
  2. The ODA is only 41% compliant with CIS, which means there is still room for improvement and some work for sysadmins like me ;-)

More seriously, a real added value of this tool is that besides doing the compliance check it provides a feature to fix some or all points. The advantage here is that, in comparison with manual changes, it makes sure it does not change anything the ODA relies on and break it.

What about the database?

Of course the ODA is not only an operating system. In the end there are databases running on it. So the question is: if cis.py performs checks at OS level, what can I do at DB level?

For this Oracle released a free (yes, free) tool called DBSAT, which stands for Database Security Assessment Tool.
https://www.oracle.com/database/technologies/security/dbsat.html

This tool runs against your database and performs CIS as well as some GDPR compliance checks, producing a report. The report can be exported in JSON for activities such as cross-database checks.
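As a rough idea of the workflow – a hedged sketch only, since the connect string, the output name and the prompts or options differ between DBSAT versions:

# collect the security data from the database, then generate the report
./dbsat collect system@MYDB dbsat_mydb
./dbsat report dbsat_mydb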

More blogs to follow about these tools, once back from the DOAG…but now it‘s slowly time for the traditional Schweitzer Abend and some party ;-)

Cet article ODA and CIS / GDPR features est apparu en premier sur Blog dbi services.

Partner Webcast – Automate Data Integration with Oracle Data Integration Platform Cloud

Oracle Data Integration Platform Cloud is part of the platform service offerings in the Oracle Cloud Integration family, offering a comprehensive cloud-based platform solution for all of your data...

We share our skills to maximize your revenue!
Categories: DBA Blogs

[BLOG] FAQ’s: Oracle GoldenGate 12c: Part IV

Online Apps DBA - Tue, 2018-11-20 04:26

Do you want to know the most frequently asked questions about Oracle GoldenGate? If yes, then visit: https://k21academy.com/goldengate30 & know the answers to questions such as: ✔What is the reverse utility? ✔What are the types of extract checkpoint positions? ✔What are the recovery types in the extract process? & much more… […]

The post [BLOG] FAQ’s: Oracle GoldenGate 12c: Part IV appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Filtering Data Providers with Compound Conditions in Visual Builder

Shay Shmeltzer - Mon, 2018-11-19 13:53

I posted in the past a basic introduction to filtering lists in Visual Builder - showing how to use the filterCriterion to filter the records shown in a ServiceDataProvider. Since I recorded this video, a few things changed, and I also saw several customers asking how they can use more complex conditions that involve more than one filter.

In the video below I show how to define a basic filter with the latest versions (note that in VB 18.4.1 you no longer need to surround the value with quotes ""), and then I show how to create a more complex condition that involves two filter criteria and sets them to work with either an "or" or an "and" operator.

When you are using business components in Visual Builder, the filterCriterion is translated into a "q" parameter that is passed to the GET method (more about this q query parameter here). If you find that you are not getting the records you are expecting, check out the browser's Network tab to see what query parameter was passed in your call to the REST service (intro to this debugging technique here).

As you'll see, the filterCriterion contains an array of "criteria", so you can specify several of them. In the video I'm using an approach that Brian Fry showed me that gives you a more declarative way to populate the array by dragging and dropping multiple "criteria type" variables into the same array.

Note, however, that the important thing is what actually gets populated in the JSON file that defines the action. You should go into that view and verify that you have the right structure there. You can also manipulate that source directly to achieve the filter you need.
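For orientation, a compound filterCriterion in the action's JSON ends up shaped roughly like the sketch below; the attribute names, variable references and exact operator spellings here are assumptions, not the structure used in the video:

{
  "op": "$or",
  "criteria": [
    { "op": "$eq",       "attribute": "status",   "value": "{{ $variables.statusVar }}" },
    { "op": "$contains", "attribute": "custName", "value": "{{ $variables.nameVar }}" }
  ]
}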

As you'll see in the video, there are some cases where the design-time editor for this filterCriterion adds an entry into the JSON that might not match what you want (we are tracking this issue). So, as mentioned, if things don't work as expected, direct manipulation of the JSON might be required.

Categories: Development

Power BI Report Server Kerberos Setup

Yann Neuhaus - Mon, 2018-11-19 10:46
In case you have the following configuration and requirements:

Your Power BI, paginated and mobile KPI reports are published on your on-premises Power BI Report Server (named e.g. SRV-PBIRS), their data source is an Analysis Services instance located on another server (named e.g. SRV-SSASTAB\INST01, INST01 being the named instance), and you want to track/monitor who is accessing the data on Analysis Services or you have row-level security constraints.

In such a case, if you have configured your Analysis Services connection using Windows integrated authentication, you have to set up Kerberos delegation from the Power BI Report Server to the Analysis Services server. If you don't do that, your users will be faced with the famous "double-hop" issue and they won't be able to access the Analysis Services data, or you won't be able to identify who is consuming your data on the Analysis Services side.

In order to set up the Kerberos delegation you can follow the steps below:

1- Be sure to be a Domain Admin or to have sufficient permissions to create SPNs and change the service account and/or computer settings in Active Directory.

2- On your Power BI Report Server server, get the service account running your Power BI Report Server service.

(i.e. SvcAcc_PBIRS)


Note: If you did not use a domain service account, you will have to use the server name instead in the following steps.

While you are on the server, first make a backup and then change the rsreportserver.config configuration file (for a default installation it is located here: C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer). Add the <RSWindowsNegotiate/> parameter in the <AuthenticationTypes> XML node.
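The relevant fragment of rsreportserver.config should then look roughly like this (a sketch of the usual layout; the other authentication entries depend on your installation):

<Authentication>
  <AuthenticationTypes>
    <RSWindowsNegotiate/>
    <RSWindowsNTLM/>
  </AuthenticationTypes>
</Authentication>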


Save and close the file.

3- On your Analysis Services server, get the service account running your Analysis Services service

(i.e. SvcAcc_SSASTab)


Note: If you did not use a domain service account, you will have to use the server name instead in the following steps.

4- Open a PowerShell console on any domain computer with your domain admin user.

Execute the following command to get the SPNs associated with your Power BI Report Server service account:

Setspn -l PBIRSServiceAccount

If you do not see the following entries

HTTP/SRV-PBIRS.Domain
HTTP/SRV-PBIRS

Execute the following commands to register the HTTP SPNs for your server FQDN and NETBIOS names:

SetSpn -a http/SRV-PBIRS.Domain PBIRSServiceAccount
SetSpn -a http/SRV-PBIRS PBIRSServiceAccount

Note that you have to replace SRV-PBIRS.Domain with the URL (without the virtual directory) of your Power BI Report Server site, in case you defined a URL or an HTTPS URL with a certificate.

Check again that the SPNs are correctly registered afterwards.

5- In your PowerShell session, execute the following command to get the SPNs registered for your Analysis Services service account:
SetSpn -l SvcAcc_SSASTab

You should see the following entries, meaning your Analysis Services SPNs have been registered:

MSOLAPSVC.3/SRV-SSASTAB:INST01
MSOLAPSVC.3/SRV-SSASTAB.domain:INST01

If not, run the following commands:

SetSpn -a MSOLAPSVC.3/SRV-SSASTAB:INST01 SvcAcc_SSASTab
SetSpn -a MSOLAPSVC.3/SRV-SSASTAB.domain:INST01 SvcAcc_SSASTab

Furthermore, in case you installed your Analysis Services with a named instance (in my example INST01), check if SPNs have been registered for the Analysis Services SQL Browser service (the server name is used in that case because the SQL Server Browser is started with a local service account):

SetSpn -l SRV-SSASTAB

You should see the following entries:

MSOLAPDisco.3/SRV-SSASTAB
MSOLAPDisco.3/SRV-SSASTAB.domain

If not, run the following commands:

SetSpn -a MSOLAPDisco.3/SRV-SSASTAB SRV-SSASTAB
SetSpn -a MSOLAPDisco.3/SRV-SSASTAB.domain SRV-SSASTAB

 

6- For the next step you have to open the Active Directory administration tools.

Open the properties of your Power BI Report Server service account. In the Account tab, uncheck "Account is sensitive and cannot be delegated".


Then, in the Delegation tab, select "Trust this user for delegation to any service". If you have security constraints on the delegation, it is recommended to use the third option and to select only the services you defined in step 5.


7- Finally, restart your Power BI Report Server service.

Cet article Power BI Report Server Kerberos Setup est apparu en premier sur Blog dbi services.

Table order

Jonathan Lewis - Mon, 2018-11-19 07:30

Over the last few days I’ve highlighted on Twitter a couple of older posts showing how a change in the order that tables appear in the from clause could affect the execution plan of a query. In one case the note was purely theoretical describing a feature of the way the optimizer works with simple query blocks, in the other case the note was about an anomaly with table elimination that could appear with both “ANSI” and “traditional” Oracle syntax.

Here’s another note that might be more generally useful – an example of an odd side effect of ordering and “ANSI” syntax, with a suggestion for a pattern for writing ANSI SQL. It’s based on a test I wrote to play around with a problem that showed up on the Oracle database forum more than six years ago and shows a strange inconsistency. The setup is a little long-winded as the example involves 4 tables, so I’ll leave the script to create, load and index the tables to the end of the note. Here’s the query that introduced the problem; it’s a fairly straightforward 4 table join with two (left) outer joins:


select
        episode.episode_id , episode.cross_ref_id , episode.date_required ,
        product.number_required,
        request.site_id
from
        episode
left join
        request
on      episode.cross_ref_id = request.cross_ref_id
join
        product
ON      episode.episode_id = product.episode_id
left join
        product_sub_type
ON      product.prod_sub_type_id = product_sub_type.prod_sub_type_id
where
        episode.department_id = 2
and     product.status = 'I'
order by
        episode.date_required
;

And here’s the execution plan:


----------------------------------------------------------------------------------------
| Id  | Operation            | Name    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |         | 33333 |  1725K|       | 17135   (4)| 00:00:01 |
|   1 |  SORT ORDER BY       |         | 33333 |  1725K|  2112K| 17135   (4)| 00:00:01 |
|*  2 |   HASH JOIN OUTER    |         | 33333 |  1725K|  1632K| 16742   (4)| 00:00:01 |
|*  3 |    HASH JOIN         |         | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| PRODUCT | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL| EPISODE |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    TABLE ACCESS FULL | REQUEST |  4000K|    57M|       | 13542   (3)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID"(+))
   3 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

The first thing you’ll notice, of course, is that the plan reports a three table join. Thanks to various referential integrity constraints, the absence of the table in the final select list, and the nature of the join to that table, the optimizer has determined that the product_sub_type table could be eliminated from the join without changing the result set.

What you can’t tell from the plan is that there’s an index on the request table that holds all the columns needed to satisfy the query, and an index fast full scan on the index would be significantly more efficient than the tablescan that appears at operation 6.

Having noticed from the plan that product_sub_type is redundant, the obvious thing to do before investigating further is to rewrite the statement to remove the table. Here’s the resulting query, with execution plan:
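The rewritten statement is essentially the original query with the product_sub_type outer join dropped – something along these lines:

select
        episode.episode_id , episode.cross_ref_id , episode.date_required ,
        product.number_required,
        request.site_id
from
        episode
left join
        request
on      episode.cross_ref_id = request.cross_ref_id
join
        product
ON      episode.episode_id = product.episode_id
where
        episode.department_id = 2
and     product.status = 'I'
order by
        episode.date_required
;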

----------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             | 33333 |  1725K|       |  5525   (6)| 00:00:01 |
|   1 |  SORT ORDER BY         |             | 33333 |  1725K|  2112K|  5525   (6)| 00:00:01 |
|*  2 |   HASH JOIN OUTER      |             | 33333 |  1725K|  1632K|  5132   (7)| 00:00:01 |
|*  3 |    HASH JOIN           |             | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | PRODUCT     | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL  | EPISODE     |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    INDEX FAST FULL SCAN| IX4_REQUEST |  4000K|    57M|       |  1932   (7)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID"(+))
   3 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

So – when the optimizer removes product_sub_type from the query the plan reports a tablescan of request; when we remove product_sub_type ourselves the plan reports an index fast full scan of an appropriate index – which appears to cost roughly one seventh (1,932 vs. 13,542) of the tablescan. It’s a little surprising that the optimizer didn’t get it right by itself – but “ANSI” style SQL often displays quirky little side effects because of the way the optimizer transforms it into traditional Oracle style.

We could stop at that point, of course, but then you’d wonder about the significance of the title of the post. So let’s play around with the join order of the original query, without removing the product_sub_type table.

As a general strategy (though not an absolute rule) I tend to arrange code so that outer joins don’t appear before “inner” joins. In this example that means I would have written the original statement as follows:


select
        episode.episode_id, episode.cross_ref_id, episode.date_required,
        product.number_required,
        request.site_id
from
        episode
join
        product
ON      product.episode_id = episode.episode_id
left join
        product_sub_type
ON      product_sub_type.prod_sub_type_id = product.prod_sub_type_id
left join
        request
on      request.cross_ref_id = episode.cross_ref_id
where
        episode.department_id = 2
and     product.status        = 'I'
order by
        episode.date_required
;

All I’ve done is move the join between episode and product up the SQL, following it with the outer join to product_sub_type, finally closing with the outer join between episode and request. Here’s the execution plan – which you might expect to look exactly like the original plan:


----------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             | 33333 |  1725K|       |  5525   (6)| 00:00:01 |
|   1 |  SORT ORDER BY         |             | 33333 |  1725K|  2112K|  5525   (6)| 00:00:01 |
|*  2 |   HASH JOIN OUTER      |             | 33333 |  1725K|  1632K|  5132   (7)| 00:00:01 |
|*  3 |    HASH JOIN           |             | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | PRODUCT     | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL  | EPISODE     |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    INDEX FAST FULL SCAN| IX4_REQUEST |  4000K|    57M|       |  1932   (7)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REQUEST"."CROSS_REF_ID"(+)="EPISODE"."CROSS_REF_ID")
   3 - access("PRODUCT"."EPISODE_ID"="EPISODE"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

The product_sub_type table has been eliminated and we’re doing an index fast full scan of the ix4_request index instead of a tablescan of the much larger request table.

tl;dr

Changing the order of the tables in an ANSI join – especially when there are outer joins involved – could make a significant difference to the way the query is transformed and optimised. While it is nice to write the table ordering so that “chains” of joins are easily visible, bear in mind that re-ordering the join to postpone outer joins may be enough to help the optimizer produce a better execution plan.

Footnote

If you want to play around with the example, here’s the code to create and load the tables. The code doesn’t follow my usual style as most of it is cut-n-pasted from the Oracle forum thread:

create table episode (
        episode_id number (*,0),
        department_id number (*,0),
        date_required date,
        cross_ref_id varchar2 (11),
        padding varchar2 (80),
        constraint pk_episode primary key (episode_id)
)
;

create table product_sub_type (
        prod_sub_type_id number (*,0),
        sub_type_name varchar2 (20),
        units varchar2 (20),
        padding varchar2 (80),
        constraint pk_product_sub_type primary key (prod_sub_type_id)
)
;

create table product (
        product_id number (*,0),
        prod_type_id number (*,0),
        prod_sub_type_id number (*,0),
        episode_id number (*,0),
        status varchar2 (1),
        number_required number (*,0),
        padding varchar2 (80),
        constraint pk_product primary key (product_id),
        constraint nn_product_episode check (episode_id is not null) 
)
;

alter table product add constraint fk_product 
        foreign key (episode_id) references episode (episode_id)
;

alter table product add constraint fk_prod_sub_type
        foreign key (prod_sub_type_id) references product_sub_type (prod_sub_type_id)
;

create table request (
        request_id number (*,0),
        department_id number (*,0),
        site_id number (*,0),
        cross_ref_id varchar2 (11),
        padding varchar2 (80),
        padding2 varchar2 (80),
        constraint pk_request primary key (request_id),
        constraint nn_request_department check (department_id is not null),
        constraint nn_request_site_id check (site_id is not null)
)
;

prompt  ===================
prompt  Loading episode ...
prompt  ===================

insert /*+ append */ into episode
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, 2,
    sysdate + mod (r, 14),
    to_char (r, '0000000000'),
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ' || to_char (r, '000000')
  from generator g
where g.r <= 3e5
/ 

commit;

prompt  ============================
prompt  Loading product_sub_type ...
prompt  ============================

insert /*+ append */ into product_sub_type
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, 
       to_char (r, '000000'),
       to_char (mod (r, 3), '000000'),
       'ABCDE' || to_char (r, '000000')
  from generator g
where g.r <= 15
/ 

commit;

prompt  ===================
prompt  Loading product ...
prompt  ===================

insert /*+ append */ into product
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, mod (r, 12) + 1, mod (r, 15) + 1, mod (r, 300000) + 1,
       decode (mod (r, 3), 0, 'I', 1, 'C', 2, 'X', 'U'),
       dbms_random.value (1, 100), NULL
  from generator g
where g.r <= 1e5
/ 

commit;

prompt  ===================
prompt  Loading request ...
prompt  ===================

insert /*+ append */ into request
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e7
       ) 
select 
        r, mod (r, 4) + 1, 1, to_char (r, '0000000000'),
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567890123456789' || to_char (r, '000000'),
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789012345678' || to_char (r, '000000')
  from generator g
where g.r <= 4e6
/ 

commit;

create index ix1_episode_cross_ref on episode (cross_ref_id);

create index ix1_product_episode on product (episode_id);
create index ix2_product_type on product (prod_type_id);

create index ix1_request_site on request (site_id);
create index ix2_request_dept on request (department_id);
create index ix3_request_cross_ref on request (cross_ref_id);
create index ix4_request on request (cross_ref_id, site_id);

exec dbms_stats.gather_schema_stats ('test_user')

Note that there is a call to gather_schema_stats() at the end, rather than a set of 4 calls to gather_table_stats(); you may want to change this. The entire data set, including indexes, will need about 1.5GB of free space.

 

Hotel Julian Reimagines Modern Luxury Experience with Oracle Hospitality

Oracle Press Releases - Mon, 2018-11-19 07:00
Press Release
Hotel Julian Reimagines Modern Luxury Experience with Oracle Hospitality Oracle Hospitality OPERA, RES 3700 and MICROS Workstation 7 Power New Chicago Hotel

Redwood Shores, Calif.—Nov 19, 2018

Hotel Julian, the highly-anticipated luxury-lifestyle property from Oxford Capital Group, LLC., open to the public since October 2018, offers guests a remixed luxury experience supported by Oracle Hospitality solutions.  

“Hotel Julian embodies the spirit of luxury hospitality and we are excited to provide guests with a unique experience that blends architecture, history and art,” said Ludmila Chervona, Hotel Julian general manager. “With OPERA property management software, we are able to provide guests with curated service that will help establish Hotel Julian as a new landmark property in Chicago.”

Hotel Julian uses Oracle Hospitality OPERA property management software to create a unique experience for guests staying in one of the 218 available rooms. On-property restaurant About Last Knife also features Oracle Hospitality RES 3700 and MICROS Workstations to streamline operations and provide engaging service for diners.

“Hotel guests today are looking for more than the basics–they are looking for an unforgettable experience from the moment they check in through to departure,” said Greg Webb, senior vice president and general manager, Oracle Hospitality. “Battle tested at the largest hotel chains in the world, Oracle provides independent operators the technology needed to deliver engaging guest experiences that will drive long-term loyalty.”

Oracle Hospitality OPERA provides a full-featured property management system that enables hoteliers to deliver world-class guest service and increase operational efficiency across the property. In addition, Oracle Hospitality RES 3700, a leading food and beverage management platform, provides enterprise point-of-sale and back-office functionality to support a wide range of food and beverage operations.

Hotel Julian purchased Oracle Hospitality OPERA, RES 3700 and MICROS Hardware in April 2018.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation.

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

About Oxford Capital Group, LLC and Oxford Hotels & Resorts, LLC

Oxford Capital Group, LLC is a national real estate investment, development and management firm. Oxford Hotels and Resorts, LLC is its wholly owned hotel operating affiliate. Oxford, its affiliates, and principals have been involved in approximately $3 billion of real estate and private equity investments, including approximately 13,000 hotel rooms and over 2,000 senior housing units. The firm's primary areas of focus are hospitality, mixed-use, senior housing, multifamily, urban retail, parking, and other operationally intensive forms of real estate.  This includes health, fitness, spa, athletic and sports/entertainment destination clubs, and entertainment destination outlets including hotel rooftop venues. Oxford's geographic focus includes the nation’s top cities, such as Chicago, New York City, metro Washington D.C., Boston, Los Angeles, San Francisco, and other select markets with unique attributes including Charleston, SC, New Orleans, LA, and a variety of markets throughout Florida. In addition to prominent national projects throughout the country, Hotel Julian Millennium Park is Oxford’s 13th concept. Its growing hospitality brand collection includes ALK, The Godfrey Hotel & Cabanas, Cass, Essex, SX, Felix, Godfrey, I|O, LondonHouse, LH, Julian, WTR Pool & Grill, and SpaBoutique. For information, visit www.oxford-capital.com or www.ohrllc.com

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Oracle Certified to Secure Government Communications

Oracle Press Releases - Mon, 2018-11-19 07:00
Press Release
Oracle Certified to Secure Government Communications With FIPS-140-2 Validation, Government Agencies Can Leverage Oracle Communications Enterprise Session Border Controllers to Enable Highly Secure Real-time Communications

Redwood Shores, Calif.—Nov 19, 2018

Oracle has received Federal Information Processing Standard (FIPS)-140-2 validation for five of its Enterprise Session Border Controllers (E-SBC). FIPS-140-2 is a standard for cryptographic modules protecting sensitive information in computer and telecommunication systems for U.S. government and military use. With this validation, government agencies and customers in highly-regulated industries can utilize Oracle’s E-SBCs to enable the security, privacy and integrity of real-time communications traffic passing through their networks.

"Oracle has a history of securing real-time communications and the FIPS 140-2 validation reinforces our reputation for delivering purpose-built, industry leading products that can defend against a myriad of cyberattacks,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “We are proud to have met this high standard so we can continue to help customers protect their most sensitive information -- whether a government agency, federal contractor or an enterprise operating in a highly-regulated industry."

Jointly operated by the National Institute of Standards and Technology (NIST) and the Communications Security Establishment of Canada, the Cryptographic Module Validation Program administers FIPS 140-2 validation, which has become the standard for regulated industries such as finance, healthcare and utilities.

Oracle E-SBCs can protect IP communications networks from new and evolving security threats such as Shellshock, POODLE, Heartbleed and BEAST. These FIPS 140-2 validated Oracle E-SBCs can now be procured by U.S. federal agencies, including the Department of Defense (DoD):


"While FIPS 140-2 is applicable to U.S. federal agencies for cryptographic modules, it is becoming widely viewed by non-government sectors as the preeminent security benchmark,” said Johna Till Johnson, CEO of Nemertes Research. “Obtaining FIPS 140-2 demonstrates the ability to provide a compliant and secure network offering that can help customers mitigate dangerous cyber threats, and protect valuable assets"

Contact Info
Katie Barron
Oracle
+1.202.904.1138
Katie.Barron@oracle.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. For more information, visit our website at www.oracle.com/communications.

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

[BLOG] Art of Oracle Database Performance Tuning: Things You Must Know

Online Apps DBA - Mon, 2018-11-19 04:29

Do you want to learn how to improve and multiply the performance of a database significantly? If yes, then visit: https://k21academy.com/tuning13 where we have covered: ✔Hardware & DBMS Tuning ✔Query Optimisation ✔Process of Performance Tuning & much more… […]

The post [BLOG] Art of Oracle Database Performance Tuning: Things You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Is there too much memory for my SQL Server instance?

Yann Neuhaus - Mon, 2018-11-19 02:56

Is there too much memory for my SQL Server instance? This is definitely an uncommon question I had to deal with for one of my customers a couple of weeks ago. Usually DBAs complain when they don't have enough memory for the environments they have to manage, and the fact is that SQL Server (like other RDBMSs) provides plenty of tools for memory pressure troubleshooting. But what about the opposite? This question was raised in the context of an environment that includes a lot of virtual database servers (> 100) on top of VMware, where my customer was asked to lower the SQL Server instance memory reservations when possible in order to free memory from the ESX hosts.


Let's start with sys.dm_os_sys_memory. This is the first one that my customer wanted to dig into. This DMV may be helpful to get a picture of the overall system state, including external memory conditions at the operating system level and the physical limits of the underlying hardware.

select 
	available_physical_memory_kb / 1024 AS available_physical_memory_MB,
	total_physical_memory_kb / 1024 AS total_physical_memory_MB,
	total_page_file_kb / 1024 AS total_page_file_MB,
	available_page_file_kb / 1024 AS available_page_file_MB,
	system_cache_kb / 1024 AS system_cache_MB
from sys.dm_os_sys_memory;

 

[Screenshot: sys.dm_os_sys_memory output]

 

But in the context of my customer it only partially helped to figure out the memory consumption of the SQL Server instances, because we didn't really face any environments under pressure here.

However, another interesting DMV we may rely on is sys.dm_os_sys_info. We may also use its counterparts among the perfmon counters, \Memory Manager\Target Server Memory (KB) and \Memory Manager\Total Server Memory (KB), as shown below:

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_KB,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

 

[Screenshot: sys.dm_os_sys_info output]

The concepts of committed and target committed memory are important here to figure out how SQL Server deals with memory space. The committed memory represents the physical memory allocated by the SQL Server process, whereas the target memory is the amount of memory SQL Server tries to maintain as committed memory according to different factors described in the BOL. From my experience, chances are the latter is close to the max server memory value in most scenarios, by the way.
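Incidentally, the two Memory Manager counters mentioned above can also be read from T-SQL through sys.dm_os_performance_counters, for example:

select counter_name,
       cntr_value / 1024 AS value_MB
  from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%'
   and counter_name in ('Target Server Memory (KB)', 'Total Server Memory (KB)');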

But relying blindly on the committed memory may lead to misinterpretation of what SQL Server is really consuming over a specific period of time. Indeed, let's say my SQL Server instance is capped at 2GB; here are the corresponding figures after the daily business workload. The values in the context of my customer were of a different order of magnitude, but this demo will help to figure out the issue that motivated this write-up:

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_KB,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

 

[Screenshot: sys.dm_os_sys_info after the daily business workload]

The committed memory is about 365MB, far from the configured max server memory parameter value – 2GB. But now let's say the database maintenance kicks in. Usually this is a nightly job, on a daily or weekly basis, that generally includes an index rebuild task which consists in reading pretty much all the data structures to get external fragmentation values through the DMF sys.dm_db_index_physical_stats(). This operation can touch structures that are not used during daily business and may have a huge impact on the buffer pool. In my case, here is the new memory state after executing this maintenance task:

[Screenshot: sys.dm_os_sys_info after the maintenance task]

The game has changed here because SQL Server has committed all the memory up to the max server memory value. This time we may go through the sys.dm_os_memory_clerks DMV to get details from the different memory clerks of my SQL Server instance. The pages_kb column is used because the instances run SQL Server 2014.

SELECT
	[type],
	pages_kb / 1024 AS size_MB
FROM sys.dm_os_memory_clerks
WHERE memory_node_id = 0
ORDER BY size_MB DESC

 

[Screenshot: sys.dm_os_memory_clerks output]

So, from now on the committed memory has a good chance of staying close to the max server memory value, while in fact the daily business workload probably won't need all this memory allocated to the SQL Server process. This is exactly why my customer asked me for a way to get a more realistic picture of the memory consumption of its SQL Server instances during daily business, excluding the nightly database maintenance workload.

We went through a solution that consisted in freeing up the committed memory before starting the business day and letting the memory grow gradually until reaching its maximum usage. It is worth noting that there is no easy way, as far as I know, to free up the committed memory, and SQL Server may decrease it only if the corresponding target server memory value is lower. From my experience this situation is more an exception than the rule of thumb, and therefore it is difficult to rely on it. One potential workaround might be to restart the SQL Server instance(s), but in the case of my customer restarting the database servers was not an option, and we looked into a solution that forced SQL Server to make room by setting the max server memory close to the min server memory value. Don't get me wrong, I don't consider this a best practice but rather an emergency procedure because, like restarting a SQL Server instance, it may lead to a temporary impact, but of a much higher magnitude, especially when the workload performance is directly tied to the state of the buffer cache (warm vs cold). In addition, I would say that scaling the max server memory value to only the daily business workload may be controversial in many ways, and in fact we have to consider some tradeoffs here. In the context of my customer, the main goal was to release "unused memory" from the SQL Server instances during daily business to free up memory from the VMware ESX hosts, but there is no free lunch. For instance, the nightly workload may suddenly take longer to run if there is less room to work in memory. Another direct side effect of working with less memory might be an increase of I/O operations against the storage layer. In a nutshell, there is no black-or-white solution and we have to go with what we consider the best solution for the specific context.
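For reference, the "emergency" adjustment described above boils down to an sp_configure change; a minimal sketch with purely illustrative values:

-- temporarily lower max server memory so SQL Server releases committed memory
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 2048;  -- value close to min server memory (illustrative)
reconfigure;

-- later, restore the original cap
exec sp_configure 'max server memory (MB)', 8192;  -- original value (illustrative)
reconfigure;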

See you!

 

 

 

Cet article Is there too much memory for my SQL Server instance? est apparu en premier sur Blog dbi services.

removing control characters from text

Tom Kyte - Sun, 2018-11-18 22:06
Is there a routine in Oracle that can test for and remove unwanted characters in a text string, i.e. control characters?
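One common approach – a sketch, and not necessarily the answer given on AskTOM – is to strip the POSIX control-character class with REGEXP_REPLACE:

select regexp_replace(txt, '[[:cntrl:]]', '') as cleaned
  from (select 'tab:' || chr(9) || ' cr:' || chr(13) || ' done' as txt from dual);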
Categories: DBA Blogs

Presenting at AUSOUG Connect 2018 Conference in Melbourne, 21 November 2018.

Richard Foote - Sun, 2018-11-18 06:25
  After initially not being in a position to make it this year, I will now be presenting at the AUSOUG Connect 2018 Conference in Melbourne this coming Wednesday, 21 November 2018. My presentation will be: “12c Release 2 and 18c – New Indexing Related Features” Oracle Database 12.2 and 18.3 releases have introduced a number […]
Categories: DBA Blogs

Exadata Cloud Machine - Hardware capacity

Syed Jaffar - Sun, 2018-11-18 04:13
Everyone who is aware of and utilizes the Exadata Database Machine certainly knows the performance it can deliver. I have been involved in many Exadata migration projects and witnessed how customers gained database performance and were satisfied post-migration. I am not talking about the cost, the need etc.; as a technical guy, I know the capabilities of the box and how it can benefit customers in fulfilling their needs and future demands.

We all know about cloud technologies and how every software company and organization is trying to keep pace with the trend and need for them. In some countries, cloud adoption is a bit slower compared to other parts of the world. But gradually the majority of companies will be adopting cloud technologies, that is for sure. Certainly, the cloud has its own share of advantages and disadvantages. Whoever utilizes it smartly can gain much flexibility and many benefits.

To meet customers' demand for Exadata availability in the cloud, Oracle started the Exadata Cloud Service offering, providing the Exadata machine in the cloud. Still, some organizations couldn't adopt the cloud due to industry regulations, corporate policies, security compliance etc. Therefore, Oracle announced the availability of the Exadata Cloud Machine. Customers who want to have cloud on-premises with Exadata hardware can go for this model.

I would like to highlight the hardware capabilities that the Exadata Cloud Machine (ExaCM) offers.


  • 40Gbps InfiniBand Networking
  • Ultra-fast NVMe Flash storage
  • Up to 257GB/sec Throughput
  • Up to 3.6 Million 8k I/Os per sec
  • 1/4 millisecond response time
  • Fastest Compute 
  • Fastest x86 processors
  • Large Memory Capacity - 720GB per compute node
  • Complete Redundancy
I will soon talk more about Exadata Cloud Machine migrations. Stay tuned and stay hungry for knowledge.

Polymorphic Table Functions

Bar Solutions - Sun, 2018-11-18 00:04

I have been working on a presentation on Polymorphic Table Functions. During this time I was looking for a real use case for Polymorphic Table Functions. I came up with an example which is not very useful in real life, but very useful to explain the technique.
At my current job I came across a piece of code that I had to copy and adjust to fit the needs of that specific case. The idea was always the same: I get a table with semicolon-separated values in one column that have to be split into the correct number of columns before checking the data against the current data in a specific table.
I thought: ‘Maybe I can solve this copy-paste-adjust process by using a Polymorphic Table Function.’

Let’s first set the current scene.
We have two tables. The well known EMP and DEPT tables.

create table emp
(empno    number(4) not null
,ename    varchar2(10)
,job      varchar2(9)
,mgr      number(4)
,hiredate date
,sal      number(7, 2)
,comm     number(7, 2)
,deptno   number(2)
)
/
create table dept
(deptno number(2)
,dname  varchar2(14)
,loc    varchar2(14)
)
/        

And we add the well known data:

insert into emp values (7369, 'SMITH',  'CLERK',     7902, to_date('17-DEC-1980', 'DD-MON-YYYY'),  800, null, 20);
insert into emp values (7499, 'ALLEN',  'SALESMAN',  7698, to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);
insert into emp values (7521, 'WARD',   'SALESMAN',  7698, to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);
insert into emp values (7566, 'JONES',  'MANAGER',   7839, to_date('2-APR-1981', 'DD-MON-YYYY'),  2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN',  7698, to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE',  'MANAGER',   7839, to_date('1-MAY-1981', 'DD-MON-YYYY'),  2850, null, 30);
insert into emp values (7782, 'CLARK',  'MANAGER',   7839, to_date('9-JUN-1981', 'DD-MON-YYYY'),  2450, null, 10);
insert into emp values (7788, 'SCOTT',  'ANALYST',   7566, to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING',   'PRESIDENT', null, to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN',  7698, to_date('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);
insert into emp values (7876, 'ADAMS',  'CLERK',     7788, to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES',  'CLERK',     7698, to_date('3-DEC-1981', 'DD-MON-YYYY'),   950, null, 30);
insert into emp values (7902, 'FORD',   'ANALYST',   7566, to_date('3-DEC-1981', 'DD-MON-YYYY'),  3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK',     7782, to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, null, 10);
insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH',   'DALLAS');
insert into dept values (30, 'SALES',      'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

We get data from a different system, which is in a semi-colon separated format. So we load that in a couple of staging tables:

create table empstg
(line varchar2(4000)
)
/
create table deptstg
(line varchar2(4000)
)
/

And then we add some data:

insert into empstg values ('7369;SMITH;CLERK;7902;17121980;800; ;20');
insert into empstg values ('7499;ALLEN;SALESMAN;7698;20021981;1600; 300;30');
insert into empstg values ('7521;WARD;SALESMAN;7698;22021981;1250; 500;30');
insert into empstg values ('7566;JONES;MANAGER;7839;02041981; 2975; ;20');
insert into empstg values ('7654;MARTIN;SALESMAN;7698;28091981;1250;1400;30');
insert into empstg values ('7698;BLAKE;MANAGER;7839;01051981; 2850; ;30');
insert into empstg values ('7782;CLARK;MANAGER;7839;09061981; 2450; ;10');
insert into empstg values ('7788;SCOTT;ANALYST;7566;09121982;3000; ;20');
insert into empstg values ('7839;KING;PRESIDENT; ;17111981;5000; ;10');
insert into empstg values ('7844;TURNER;SALESMAN;7698;08091981; 1500;0;30');
insert into empstg values ('7876;ADAMS;CLERK;7788;12011983;1100; ;20');
insert into empstg values ('7900;JAMES;CLERK;7698;03121981;  950; ;30');
insert into empstg values ('7902;FORD;ANALYST;7566;03121981; 3000; ;20');
insert into empstg values ('7934;MILLER;CLERK;7782;23011982;1300; ;10');
insert into empstg values ('2912;BAREL;DEVELOPER;7839;29122017;4000; ;50');

insert into deptstg values ('10;ACCOUNTING;NEW YORK');
insert into deptstg values ('20;RESEARCH;DALLAS');
insert into deptstg values ('30;SALES;NORTH CAROLINA');
insert into deptstg values ('40;OPERATIONS;BOSTON');
insert into deptstg values ('50;DEVELOPMENT;SAN FRANCISCO');

To process the data and merge it into the main tables we use a package. We could have used a merge statement, but that implies that even the rows that are unchanged will get an update anyway, which results in a lot of journaling data being written by triggers. Using the EMP and DEPT tables this wouldn't be too much of a problem, but we are talking 250k+ rows each time (at least once a day).
So we want a little more control and only insert/update when it’s really necessary.

create or replace package process_stg is
  procedure dept;
  
  procedure emp;
end process_stg;
/
create or replace package body process_stg is
  failure_in_forall exception;
  pragma exception_init(failure_in_forall, -24381);
  c_limit     constant number := 10;
  procedure dept is
    cursor c_inserts is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
          from deptstg stg)
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        left outer join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and d.deptno is null;
    cursor c_updates is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
          from deptstg stg)
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and (   coalesce(d.dname,'-NULL-') <> coalesce(i.dname,'-NULL-')
              or coalesce(d.loc, '-NULL-')  <> coalesce(i.loc, '-NULL-')
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into dept(deptno, dname, loc)
            values (l_data(indx).deptno, l_data(indx).dname, l_data(indx).loc);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update dept
               set dname = l_data(indx).dname
                 , loc = l_data(indx).loc
             where 1=1
               and deptno = l_data(indx).deptno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end dept;

  procedure emp is
    cursor c_inserts is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) empno   
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename   
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) job     
              ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr     
              ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
              ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal     
              ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm    
              ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno  
          from empstg stg)
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and e.empno is null;
    cursor c_updates is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) empno   
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename   
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) job     
              ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr     
              ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
              ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal     
              ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm    
              ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno  
          from empstg stg)
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and (   coalesce(e.ename,    '-NULL') <> coalesce(i.ename,    '-NULL')
              or coalesce(e.job,      '-NULL') <> coalesce(i.job,      '-NULL')
              or          e.mgr                <>          i.mgr               
              or coalesce(to_char(e.hiredate,'DDMMYYYY'), '-NULL') <> coalesce(i.hiredate, '-NULL')
              or          e.sal                <>          i.sal               
              or          e.comm               <>          i.comm               
              or          e.deptno             <>          i.deptno            
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into emp(empno, ename, job, mgr, hiredate, sal, comm, deptno)
            values (l_data(indx).empno, l_data(indx).ename, l_data(indx).job, l_data(indx).mgr, to_date(l_data(indx).hiredate, 'DDMMYYYY'), l_data(indx).sal, trim(l_data(indx).comm), l_data(indx).deptno);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update emp
              set ename       = l_data(indx).ename    
                , job         = l_data(indx).job      
                , mgr         = l_data(indx).mgr      
                , hiredate    = to_date(l_data(indx).hiredate, 'DDMMYYYY') 
                , sal         = l_data(indx).sal      
                , comm        = l_data(indx).comm     
                , deptno      = l_data(indx).deptno   
            where 1=1
              and empno       = l_data(indx).empno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end emp;
end process_stg;
/

As you can see, the code, especially for the cursors, is pretty much the same. The only difference is the number of columns that are generated from the semicolon-separated line.
I really don’t like to do the same thing over and over again, especially when the only difference is the number of columns and their names. But since this is what changes between the tables I think there is no way of making this generic in 12c or earlier. But then 18c came into play and they provide us with Polymorphic Table Functions.

This is what the documentation says (summary):
Polymorphic Table Functions
Polymorphic Table Functions (PTF) are user-defined functions that can be invoked in the FROM clause.
They are capable of processing tables whose row type is not declared at definition time and producing a
result table whose row type may or may not be declared at definition time. Polymorphic Table Functions
allow application developers to leverage the long-defined dynamic SQL capabilities to create powerful
and complex custom functions.

In my own words: call a function, supply a table, and get a set of columns back. You can supply the names (and number) of the columns as a parameter. These columns don't have to exist in the table; you can create them on the fly. That is exactly what I need: I have different tables with pretty much the same layout, but the results I need are completely different.
So I came up with the following Polymorphic Table Function to do what I want. First there is the specification of the package. What I need is the DESCRIBE function (which is mandatory) and a procedure to fetch the rows, in which I can alter the results.

create or replace package separated_ptf is
  function describe(tab  in out dbms_tf.table_t
                   ,cols in dbms_tf.columns_t default null) return dbms_tf.describe_t;

  procedure fetch_rows;
end separated_ptf;
/

Then there is the implementation of the package:

create or replace package body separated_ptf as
  g_colcount pls_integer; -- save the number of columns requested
  g_colname  varchar2(128); -- save the name of the first column
  function describe(tab  in out dbms_tf.table_t
                   ,cols in dbms_tf.columns_t default null) return dbms_tf.describe_t as
    -- metadata for column to add
    l_new_col dbms_tf.column_metadata_t;
    -- table of columns to add
    l_new_cols dbms_tf.columns_new_t; -- := DBMS_TF.COLUMNS_NEW_T();
  begin
    -- Mark the first column ReadOnly and don't display it anymore
    tab.column(1).for_read := true;
    tab.column(1).pass_through := false;
    -- Save the name of the first column for use in the fetch_rows procedure
    g_colname := tab.column(1).description.name;
    -- Save the number of columns for use in the fetch_rows procedure
    g_colcount := cols.count;
    -- Add the new columns, as specified in the cols parameter
    for indx in 1 .. cols.count loop
      -- define metadata for column named cols(indx)
      -- that will default to a datatype of varchar2 with
      -- a length of 4000
      l_new_col := dbms_tf.column_metadata_t(name => cols(indx));
      -- add the new column to the list of new columns
      l_new_cols(l_new_cols.count + 1) := l_new_col;
    end loop;
    -- Instead of returning NULL we will RETURN a specific
    -- DESCRIBE_T that adds new columns
    return dbms_tf.describe_t(new_columns => l_new_cols);
  end;

  procedure fetch_rows is
    -- define a table type of varchar2 tables
    type colset is table of dbms_tf.tab_varchar2_t index by pls_integer;
    -- variable to hold the rowset as retrieved
    l_rowset   dbms_tf.row_set_t;
    -- variable to hold the number of rows as retrieved
    l_rowcount pls_integer;
    -- variable to hold the new values
    l_newcolset colset;
  begin
    -- fetch rows into a local rowset
    -- at this point the rows will have columns
    -- from the table/view/query passed in
    dbms_tf.get_row_set(l_rowset, l_rowcount);
    -- for every row in the rowset...
    for rowindx in 1 .. l_rowcount loop
      -- for every column
      for colindx in 1 .. g_colcount loop
        -- split the row into separate values
        -- splitting the regexp way: http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
        l_newcolset(colindx)(rowindx) := trim(regexp_substr(json_value(dbms_tf.row_to_char(l_rowset, rowindx), '$.' || g_colname)
                                                      ,'[^;]+'
                                                      ,1
                                                      ,colindx));
      end loop; -- every column
    end loop; -- every row in the rowset
    -- add the newly populated columns to the rowset
    for indx in 1 .. g_colcount loop
      dbms_tf.put_col(columnid => indx, collection => l_newcolset(indx));
    end loop;
  end;
end separated_ptf;
/
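
To make the fetch_rows logic a bit more tangible: as the code above shows, dbms_tf.row_to_char turns each fetched row into a JSON document keyed by the column name, json_value pulls the original line back out of it, and regexp_substr splits that line on the semicolons. A standalone illustration of the splitting step (the JSON literal and values here are made up):

select trim(regexp_substr(json_value('{"LINE":"10;ACCOUNTING;NEW YORK"}', '$."LINE"')
                         ,'[^;]+', 1, 2)) dname
  from dual
/
-- DNAME
-- ----------
-- ACCOUNTING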

After creating this Polymorphic Table Function we need an interface to use it in a SQL statement:

-- create a 'wrapper' function for the polymorphic table function
CREATE OR REPLACE FUNCTION separated_fnc(p_tbl IN TABLE,  
                cols columns DEFAULT NULL)
   RETURN TABLE PIPELINED
   ROW POLYMORPHIC USING separated_ptf;
/

Now, with this wrapper function in place we can start using it:

select *
  from separated_fnc(deptstg, columns(deptno, dname, loc))
/

Which is a lot easier than:

select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
      ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
      ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
  from deptstg stg
/
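
If you want to try this standalone, a minimal setup might look like the sketch below. The DDL and sample rows are my own and only assume what the queries above already imply: a staging table with a single VARCHAR2 column named line holding semicolon-separated values.

-- hypothetical staging table and sample data, for illustration only
create table deptstg (line varchar2(4000));

insert into deptstg (line) values ('10;ACCOUNTING;NEW YORK');
insert into deptstg (line) values ('20;RESEARCH;DALLAS');
commit;

select *
  from separated_fnc(deptstg, columns(deptno, dname, loc))
/
-- DEPTNO  DNAME       LOC
-- ------  ----------  --------
-- 10      ACCOUNTING  NEW YORK
-- 20      RESEARCH    DALLAS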

And similarly we can use the same code to retrieve data from the other table:

select *
  from separated_fnc(empstg, columns(empno, ename, job, mgr, hiredate, sal, comm, deptno))
/

That looks pretty much the same as the other one, but is definitely a lot simpler than:

select trim(regexp_substr(line, '[^;]+', 1, 1)) empno
      ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename
      ,trim(regexp_substr(line, '[^;]+', 1, 3)) job
      ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr
      ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
      ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal
      ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm
      ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno
  from empstg stg
/
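
The nice thing is that any other staging table built the same way can reuse the exact same wrapper. For example, a hypothetical bonusstg table (not part of the original example, but with the separated line in its first column) would only need a different column list:

select *
  from separated_fnc(bonusstg, columns(ename, job, sal, comm))
/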

The new implementation of my package is now like this:

create or replace package body process_stg is
  failure_in_forall exception;
  pragma exception_init(failure_in_forall, -24381);
  c_limit     constant number := 10;
  procedure dept is
    cursor c_inserts is
      with import as
       (select *
          from separated_fnc(deptstg, columns(deptno, dname, loc))
       )
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        left outer join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and d.deptno is null;
    cursor c_updates is
      with import as
       (select *
          from separated_fnc(deptstg, columns(deptno, dname, loc))
       )
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and (   coalesce(d.dname, '-NULL') <> coalesce(i.dname, '-NULL')
              or coalesce(d.loc,   '-NULL') <> coalesce(i.loc,   '-NULL')
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into dept(deptno, dname, loc)
            values (l_data(indx).deptno, l_data(indx).dname, l_data(indx).loc);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update dept
               set dname = l_data(indx).dname
                 , loc = l_data(indx).loc
             where 1=1
               and deptno = l_data(indx).deptno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end dept;

  procedure emp is
    cursor c_inserts is
      with import as
       (select *
          from separated_fnc(empstg, columns(empno,ename,job,mgr,hiredate,sal,comm,deptno))
       )
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and e.empno is null;
    cursor c_updates is
      with import as
       (select *
          from separated_fnc(empstg, columns(empno,ename,job,mgr,hiredate,sal,comm,deptno))
       )
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and (   coalesce(e.ename,    '-NULL') <> coalesce(i.ename,    '-NULL')
              or coalesce(e.job,      '-NULL') <> coalesce(i.job,      '-NULL')
              or          e.mgr                <>          i.mgr               
              or coalesce(to_char(e.hiredate,'DDMMYYYY'), '-NULL') <> coalesce(i.hiredate, '-NULL')
              or          e.sal                <>          i.sal               
              or          e.comm               <>          i.comm               
              or          e.deptno             <>          i.deptno            
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into emp(empno, ename, job, mgr, hiredate, sal, comm, deptno)
            values (l_data(indx).empno, l_data(indx).ename, l_data(indx).job, l_data(indx).mgr, to_date(l_data(indx).hiredate, 'DDMMYYYY'), l_data(indx).sal, trim(l_data(indx).comm), l_data(indx).deptno);
        exception
          when failure_in_forall then
            dbms_output.put_line(q'[error]');
            dbms_output.put_line(sqlcode);
            dbms_output.put_line(sqlerrm);
            for indx in 1 .. sql%bulk_exceptions.count loop
              dbms_output.put_line('Error ' || indx || ' occurred on index ' || sql%bulk_exceptions(indx).error_index);
              dbms_output.put_line('Oracle error is ' ||
                                   sqlerrm(-1 * sql%bulk_exceptions(indx).error_code));
            end loop;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update emp
              set ename       = l_data(indx).ename    
                , job         = l_data(indx).job      
                , mgr         = l_data(indx).mgr      
                , hiredate    = to_date(l_data(indx).hiredate, 'DDMMYYYY')
                , sal         = l_data(indx).sal      
                , comm        = l_data(indx).comm     
                , deptno      = l_data(indx).deptno   
            where 1=1
              and empno       = l_data(indx).empno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end emp;
end process_stg;
/
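
A quick way to test the package is to run both procedures and check the target tables afterwards. This block is my own addition; the commit is there because the procedures themselves do not commit.

begin
  process_stg.dept;
  process_stg.emp;
  commit;
end;
/

select count(*) as dept_rows from dept;
select count(*) as emp_rows  from emp;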

There is certainly room for improvement in the current implementation, like supporting duplicate separators, making the column that is to be split a parameter, or making the separator character a parameter as well, but that is a nice project for a later time.
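
For the separator idea, a possible direction is sketched below. It follows the same package-global pattern as g_colname and g_colcount; the names sep and g_separator are my own, and I have not tested this against the code above.

-- sketch: pass the separator as an extra scalar parameter
create or replace package separated_ptf is
  function describe(tab  in out dbms_tf.table_t
                   ,cols in dbms_tf.columns_t default null
                   ,sep  in varchar2 default ';') return dbms_tf.describe_t;

  procedure fetch_rows;
end separated_ptf;
/

-- in the package body, describe would save it:   g_separator := coalesce(sep, ';');
-- and fetch_rows would split on it:              '[^' || g_separator || ']+'

create or replace function separated_fnc(p_tbl in table
                                         ,cols  columns default null
                                         ,sep   in varchar2 default ';')
  return table pipelined
  row polymorphic using separated_ptf;
/

-- usage with a pipe-separated staging table (hypothetical):
-- select * from separated_fnc(empstg, columns(empno, ename), sep => '|')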

I hope it all makes a bit of sense. If you have any improvements, don’t hesitate to comment.

AWS: Creating a static website with S3 (Simple Storage Service) via aws cli

Dietrich Schroff - Sat, 2018-11-17 13:34
There is a nice tutorial on how to create a static website using Amazon S3:
https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html

I will try to create such a website via the aws cli, so that this can be automated:
(The installation of aws cli is shown here)
# aws s3api create-bucket --bucket my.webtest --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

{
    "Location": "http://my.webtest.s3.amazonaws.com/"
}

Then create a website.json file:

$ cat website.json 

{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "error.html"
    }
}

and run

$ aws s3api put-bucket-website --bucket my.webtest --website-configuration file://website.json

After that the web console should show the static website hosting configuration for the bucket.

Next step is to create the file policy.json:

$ cat policy.json 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my.webtest/*"]
        }
    ]
}

and run

aws s3api put-bucket-policy --bucket my.webtest --policy file://policy.json

You can check via:
$ aws s3api get-bucket-policy --bucket my.webtest

{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicReadForGetBucketObjects\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my.webtest/*\"}]}"
}
The policy is also visible via the web console. Then upload your html page:

$ aws s3 cp TestWebPage.html s3://my.webtest/index.html

upload: ./TestWebPage.html to s3://my.webtest/index.html  
And here we go: the page is reachable via the bucket's website endpoint.
That was easy. OK, DNS resolution via Amazon Route 53 is still missing, but with these commands you are able to deploy a static website without clicking around...



Migration from 11g to 12c changes execution plan (adaptive plan)

Tom Kyte - Fri, 2018-11-16 15:06
Hi, we are working on a PeopleSoft migration and a database migration too. We're migrating Oracle 11.2.0.3 to 12.2.0.1, and we have an issue with a PeopleSoft query. The query on the current database environment (11.2.0.3) has an execution plan with minimal cost ...
Categories: DBA Blogs

pushing predicate into union-all view

Tom Kyte - Fri, 2018-11-16 15:06
Hi, LiveSQL link: https://livesql.oracle.com/apex/livesql/s/hjml6z0yg45qznob5sebg53vk I have a big table with an index on ID: create table tst1 as select level id, mod(level, 10) code from dual connect by level < 1000000; create...
Categories: DBA Blogs
