Channel: Oracle – Official Pythian® Blog

Oracle OpenWorld 2015 – Bloggers Meetup


Many of you are coming to San Francisco next week for Oracle OpenWorld 2015, and many of you have already booked time on your calendars for Wednesday evening before the appreciation event. You are right — the annual Oracle Bloggers Meetup, one of your favorite events of OpenWorld, is happening at the usual place and time!

What: Oracle Bloggers Meetup 2015.

When: Wed, 28-Oct-2015, 5:30pm.

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if you’re coming — we need to know the attendance numbers.


As usual, Oracle Technology Network and Pythian sponsor the venue, drinks and cool fun social stuff. This year we are dropping a cool app and resurrecting traditions — you know what that means, and if not, come and learn. All blogger community participants are invited — self-qualification is what that means ;).

As usual, vintage t-shirts, ties, or bandanas from previous meetups will make you look cool — feel free to wear them.

For those of you who don’t know the history: The Bloggers Meetup during Oracle OpenWorld was started by Mark Rittman and continued by Eddie Awad, and then I picked up the flag in 2009. This year we have Oracle Technology Network taking more leadership on the organization of the event in addition to just being a “corporate sponsor”.

The meetups have been a great success for making new friends and catching up with old ones, so let’s keep them this way! To give you an idea, here are the photos from the OOW08 Bloggers Meetup (courtesy of Eddie Awad), the OOW09 meetup blog post update from myself, and a super cool video from OOW13 by a good blogging friend, Bjorn Roest.

While the initial meetups were mostly targeted at Oracle database folks, people from many Oracle technologies — Oracle database, MySQL, Apps, Sun technologies, Java and more — now join in the fun. All bloggers are welcome. Last year we crossed 150 attendees and I expect this year we may set a new record.

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL (or whatever you consider a replacement of that — I’ll leave it to your interpretation) with your comment — it’s a Bloggers Meetup after all! Please do make sure you comment here if you are attending so that we have enough room, food, and (most importantly) drinks.

Of course, do not forget to blog, tweet, linkedin, G+, instagram, email and just talk about this year’s bloggers meetup. See you there — it will be fun!

 

Discover more about our expertise in the world of Oracle.


Errors in a Pluggable Database?


 

There might be a situation where executing some DDL in a pluggable database causes the following error:

ORA-65040: operation not allowed from within a pluggable database

This error can occur when a tablespace is being dropped from within a PDB and that tablespace is a former default tablespace that still holds some system objects. Those system objects cannot be moved with simple ALTER statements from within the PDB either.

So in order to move these objects from within a PDB, you need the procedure dbms_pdb.exec_as_oracle_script, which is undocumented so far.

For example:

exec dbms_pdb.exec_as_oracle_script('alter table <owner>.<table_name> move tablespace <tablespace_name>');

From My Oracle Support, Doc ID 1943303.1 lists:

—   This procedure enables execution of certain restricted statements (most DDLs) on metadata-linked objects, from within a PDB.
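Putting that together, here is a hedged sketch of the workflow from inside the PDB. The container, tablespace, schema and table names (MYPDB, OLD_DEFAULT, USERS, SOMEOWNER.SOMETABLE) are made up for illustration only:

alter session set container = MYPDB;

-- Find which objects are still stored in the former default tablespace
select owner, segment_name, segment_type
from dba_segments
where tablespace_name = 'OLD_DEFAULT';

-- A plain ALTER ... MOVE of these system objects from within the PDB raises
-- ORA-65040, so move each one via the undocumented procedure instead
exec dbms_pdb.exec_as_oracle_script('alter table SOMEOWNER.SOMETABLE move tablespace USERS');

-- With the tablespace emptied, the drop should now succeed
drop tablespace old_default including contents and datafiles;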

 

Discover more about our expertise in the world of Oracle.

Log Buffer #448: A Carnival of the Vanities for DBAs


This Log Buffer is dedicated to the top quality news from the arena of Oracle, SQL Server and MySQL.

Oracle:

  • We had a question on AskTom the other day, asking us to explain what a “latch” was.
  • Jonathan Lewis thinks column groups can be amazingly useful in helping the optimizer to generate good execution plans because of the way they supply better details about cardinality.
  • Today it’s all about developing software that makes access to your product easier.
  • Steve Jobs sets a great perspective on the journey of simplicity. It starts from simple, goes through complexity and ends up in simplicity.
  • AWR period comparison is pretty easy if you have access to the two periods in the same AWR repository.

SQL Server:

  • Understanding Peer-to-Peer Transactional Replication, Part 2.
  • Knee-Jerk Wait Statistics : PAGELATCH.
  • Stairway to Columnstore Indexes Level 5: Adding New Data To Columnstore Indexes.
  • SQL Server Reporting Services General Best Practices.
  • Hello Azure: Azure IaaS – Getting Started.
  • A Skills Roadmap for DBAs in the Cloud Era.

MySQL:

  • MySQL Performance: 1M QPS on mixed OLTP_RO with MySQL 5.7 GA.
  • Deploying MongoDB, MySQL, PostgreSQL & MariaDB’s MaxScale in 40min.
  • ClusterControl Tips & Tricks: wtmp Log Rotation Settings for Sudo User.
  • Setting-up second mysql instance & replication on Linux in 10 steps.
  • s9s Tools and Resources: ‘Become a MySQL DBA’ series, ClusterControl 1.2.11 release, and more!

Oracle Upgrade Failures due to METHOD_OPT and XDBCONFIG


Background

I recently experienced a problem when upgrading an old Oracle 10.2.0.4 database to 11.2.0.4 that had no matches in a My Oracle Support (MOS) or Google search. The problem first presented itself during the upgrade, when the following error was reported by the upgrade script:

ERROR at line 1:
ORA-20001: invalid column name or duplicate columns/column groups/expressions
in method_opt
ORA-06512: at "SYS.UTL_RECOMP", line 865
ORA-06512: at line 4

 

Initially, the problem was reported in the upgrade log file for the ORACLE_OCM schema, which is not critical. However, it later caused the XDB component to become invalid and, consequently, other components that depend on XDB to become invalid as well. The error reported when trying to validate XDB was:

Warning: XDB now invalid, could not find xdbconfig

 

Even if not upgrading, this error could be encountered when trying to install or re-install the XDB component in an 11g database. XDB is a mandatory component as of Oracle 12c but is optional with 11g and below. Hence, it’s possible to experience this same problem if you’re trying to add the XDB component to an 11g database that didn’t already have it.
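A quick way to see whether XDB and its dependents have actually gone invalid is to check the component registry directly; a simple sketch:

-- XDB and anything depending on it (e.g. Multimedia/ORDIM, Spatial) show as
-- INVALID in the failure scenario described here
select comp_id, comp_name, version, status
from dba_registry
order by comp_id;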

 

“Warning: XDB now invalid, could not find xdbconfig”

Several MOS documents already exist describing the error “Warning: XDB now invalid, could not find xdbconfig”. Those include:

  • Utlrp.sql results to “Warning: XDB Now Invalid, Could Not Find Xdbconfig” (Doc ID 1631290.1)
  • XDB Invalid after Utlrp during Activation of Extended Datatypes (Doc ID 1667689.1)
  • XDB Invalid After utl32k.sql during activation of extended datatypes (Doc ID 1667684.1)

Unfortunately, none of those described either the cause of or the solution to the problem I encountered. Going through the XDB installation logs, or simply running utlrp.sql manually, shows that xdbconfig is missing due to the “ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt” error.

For example:

SQL> @?/rdbms/admin/utlrp

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-11-09 11:36:22

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
DECLARE
*
ERROR at line 1:
ORA-20001: invalid column name or duplicate columns/column groups/expressions
in method_opt
ORA-06512: at "SYS.UTL_RECOMP", line 865
ORA-06512: at line 4



TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-11-09 11:36:23

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.

Warning: XDB now invalid, could not find xdbconfig
ORDIM INVALID OBJECTS: CARTRIDGE - INVALID - PACKAGE BODY
ORDIM INVALID OBJECTS: SI_IMAGE_FORMAT_FEATURES - INVALID - VIEW
ORDIM INVALID OBJECTS: SI_IMAGE_FORMAT_FEATURES - INVALID - SYNONYM
ORDIM INVALID OBJECTS: SI_IMAGE_FRMT_FTRS - INVALID - SYNONYM
ORDIM INVALID OBJECTS: ORDUTIL - INVALID - PACKAGE BODY
ORDIM INVALID OBJECTS: ORDIMG_PKG - INVALID - PACKAGE BODY
ORDIM INVALID OBJECTS: ORDIMGEXTCODEC_PKG - INVALID - PACKAGE BODY
ORDIM INVALID OBJECTS: ORDX_FILE_SOURCE - INVALID - PACKAGE BODY
ORDIM INVALID OBJECTS: DICOM_IMAGE105_T - INVALID - TYPE
ORDIM INVALID OBJECTS: exifMetadata243_T - INVALID - TYPE
ORDIM INVALID OBJECTS: PATIENT_STUDY129_T - INVALID - TYPE
ORDIM INVALID OBJECTS: GENERAL_SERIES134_T - INVALID - TYPE
ORDIM INVALID OBJECTS: GENERAL_IMAGE154_T - INVALID - TYPE
ORDIM INVALID OBJECTS: TiffIfd244_T - INVALID - TYPE
ORDIM INVALID OBJECTS: ExifIfd245_T - INVALID - TYPE
ORDIM INVALID OBJECTS: GpsIfd246_T - INVALID - TYPE
ORDIM INVALID OBJECTS: CODE_SQ103_T - INVALID - TYPE
ORDIM INVALID OBJECTS: iptcMetadataType94_T - INVALID - TYPE
ORDIM INVALID OBJECTS: IMAGE_PIXEL163_T - INVALID - TYPE
ORDIM registered 0 XML schemas.
The following XML schemas are not registered:
http://xmlns.oracle.com/ord/dicom/UIDdefinition_1_0
http://xmlns.oracle.com/ord/dicom/anonymity_1_0
http://xmlns.oracle.com/ord/dicom/attributeTag_1_0
http://xmlns.oracle.com/ord/dicom/constraint_1_0
http://xmlns.oracle.com/ord/dicom/datatype_1_0
http://xmlns.oracle.com/ord/dicom/manifest_1_0
http://xmlns.oracle.com/ord/dicom/mapping_1_0
http://xmlns.oracle.com/ord/dicom/mddatatype_1_0
http://xmlns.oracle.com/ord/dicom/metadata_1_0
http://xmlns.oracle.com/ord/dicom/orddicom_1_0
http://xmlns.oracle.com/ord/dicom/preference_1_0
http://xmlns.oracle.com/ord/dicom/privateDictionary_1_0
http://xmlns.oracle.com/ord/dicom/rpdatatype_1_0
http://xmlns.oracle.com/ord/dicom/standardDictionary_1_0
http://xmlns.oracle.com/ord/meta/dicomImage
http://xmlns.oracle.com/ord/meta/exif
http://xmlns.oracle.com/ord/meta/iptc
http://xmlns.oracle.com/ord/meta/ordimage
http://xmlns.oracle.com/ord/meta/xmp
Locator INVALID OBJECTS: ALL_SDO_GEOM_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: USER_SDO_INDEX_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: ALL_SDO_INDEX_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: USER_SDO_INDEX_INFO - INVALID - VIEW
Locator INVALID OBJECTS: ALL_SDO_INDEX_INFO - INVALID - VIEW
Locator INVALID OBJECTS: USER_SDO_LRS_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: SDO_LRS_TRIG_INS - INVALID - TRIGGER
Locator INVALID OBJECTS: SDO_LRS_TRIG_DEL - INVALID - TRIGGER
Locator INVALID OBJECTS: SDO_LRS_TRIG_UPD - INVALID - TRIGGER
Locator INVALID OBJECTS: USER_SDO_TOPO_INFO - INVALID - VIEW
Locator INVALID OBJECTS: ALL_SDO_TOPO_INFO - INVALID - VIEW
Locator INVALID OBJECTS: USER_SDO_TOPO_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: ALL_SDO_TOPO_METADATA - INVALID - VIEW
Locator INVALID OBJECTS: MDPRVT_IDX - INVALID - PACKAGE BODY
Locator INVALID OBJECTS: PRVT_IDX - INVALID - PACKAGE BODY
Locator INVALID OBJECTS: SDO_TPIDX - INVALID - PACKAGE BODY
Locator INVALID OBJECTS: SDO_INDEX_METHOD_10I - INVALID - TYPE BODY
Locator INVALID OBJECTS: SDO_GEOM - INVALID - PACKAGE BODY
Locator INVALID OBJECTS: SDO_3GL - INVALID - PACKAGE BODY

PL/SQL procedure successfully completed.

SQL>

 

Hence the ORA-20001 error is the true cause of the XDB problem.

 

“ORA-20001: Invalid column name or duplicate columns/column groups/expressions in method_opt”

Searching My Oracle Support (MOS) for this error leads to the following notes:

  • Gather Table Statistics Fails With ORA-20001 ORA-06512 On “invalid Column Name” (Doc ID 1668579.1).
  • 11i – 12 Gather Schema Statistics fails with Ora-20001 errors after 11G database Upgrade (Doc ID 781813.1).
  • Gather Schema Statistics Fails With Error For APPLSYS Schema (Doc ID 1393184.1).
  • Performance Issue Noted in Trading Partner Field of Invoice Workbench (Doc ID 1343489.1).

Unfortunately, those are all related to specific tables from the Oracle Applications Technology Stack, Oracle EBS, or Oracle Payables, and none of them were applicable in my case: the application was home grown.

Hence, MOS and Google searches returned no relevant results.

 

The Root Cause & Solution

The root cause of this problem was the METHOD_OPT parameter of DBMS_STATS.

The METHOD_OPT parameter controls how optimizer statistics histograms are collected for columns. METHOD_OPT is set using DBMS_STATS.SET_PARAM and can be queried through DBMS_STATS.GET_PARAM or directly from the underlying base table SYS.OPTSTAT_HIST_CONTROL$.

For example:

SQL> exec DBMS_STATS.SET_PARAM('METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> select sname, spare4 from SYS.OPTSTAT_HIST_CONTROL$ where sname = 'METHOD_OPT';

SNAME                          SPARE4
------------------------------ ----------------------------------------
METHOD_OPT                     FOR ALL COLUMNS SIZE AUTO

SQL>

 

The actual root cause of the ORA-20001 error and all of the subsequent failures and invalid components is that in the problematic database, the METHOD_OPT was set to the rarely used and outdated setting of “FOR COLUMNS ID SIZE 1”. From the database that experienced this issue:

SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR COLUMNS ID SIZE 1

SQL>

 

The “FOR COLUMNS ID SIZE 1” setting was sometimes used in older versions of Oracle to prevent histograms from being collected on primary key columns and to preserve plan stability across statistics changes. However, it should not be used in modern 11g or 12c databases. In fact, it is not even settable through the DBMS_STATS package after Oracle 10g. Executing it against an 11.2.0.4 database gives:

SQL> exec dbms_stats.set_param('METHOD_OPT','FOR COLUMNS ID SIZE 1');
BEGIN dbms_stats.set_param('METHOD_OPT','FOR COLUMNS ID SIZE 1'); END;

*
ERROR at line 1:
ORA-20001: method_opt should follow the syntax "[FOR ALL [INDEXED|HIDDEN]
COLUMNS [size_caluse]]" when gathering statistics on a group of tables
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at "SYS.DBMS_STATS", line 13268
ORA-06512: at "SYS.DBMS_STATS", line 13643
ORA-06512: at "SYS.DBMS_STATS", line 31462
ORA-06512: at line 1

 

It can still be set in 11.2.0.4 by directly updating SYS.OPTSTAT_HIST_CONTROL$, which is definitely NOT recommended.

And, of course, this setting can be present in an 11g database that was upgraded from an older version such as a 10g release.

Reverting this parameter to “FOR ALL COLUMNS SIZE AUTO” resolved the ORA-20001 error from UTL_RECOMP, allowing the XDB component to validate and become VALID in the registry, followed by all the other components that depend on XDB.

 

Conclusion

If upgrading an older database to 11.2.0.4 (to remain on a supported version), it is prudent to check the setting of the METHOD_OPT parameter of the DBMS_STATS package. This isn’t mentioned in any of the pre-upgrade documents or checklists, and isn’t caught even by the most recent versions of Oracle’s Database Pre-Upgrade Utility (MOS Doc ID 884522.1) or the DB Upgrade/Migrate Diagnostic Information (MOS Doc ID 556610.1).

The check and solution are simple and should be incorporated into your own pre-upgrade procedure:

SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR COLUMNS ID SIZE 1

SQL> exec DBMS_STATS.SET_PARAM('METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL>

 

Discover more about our expertise in the world of Oracle.

Oracle ASM Rebalance – Turn it up. To 11?


 

If you’ve ever seen or heard of the movie This is Spinal Tap then you have likely heard the phrase Turn it up to 11.

Why bring this up?

When ASM was introduced as a method for configuring storage for Oracle, one of the features was the ability to rebalance the data across all disks when disks were added or replaced.  The value used to control how aggressively Oracle rebalances the disks is the REBALANCE POWER. And yes, the maximum value for rebalancing was 11, as an homage to the movie.

Here is an example of a command to manually rebalance a disk group:

 alter diskgroup data rebalance power 11; 

That is rather straightforward, so why blog about it?

The reason is that the maximum value for REBALANCE POWER changed with Oracle 11.2.0.2, as per the documentation for the ASM_POWER_LIMIT parameter.

From 11.2.0.2, the maximum value is no longer 11, but 1024.
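The same upper limit applies to the ASM_POWER_LIMIT initialization parameter, which supplies the power used when a rebalance starts without an explicit POWER clause. A quick sketch of checking it on the ASM instance:

-- Run against the ASM instance; the default value is 1
select name, value
from v$parameter
where name = 'asm_power_limit';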

I’ve asked a number of DBAs about this, and it seems that knowledge of the rebalance power limit is not really too well known.

Why does it matter?

Imagine that an 11.2.0.4 ASM diskgroup has had disks replaced, and the task took longer than expected.

Now you want to speed up the rebalance of the disk group as much as possible:

 alter diskgroup data rebalance power 11; 

Will that bit of SQL do the job?

On 10g that would be fine. But on an 11.2.0.4 database that would set the POWER limit to 1.07% of the maximum allowed value, having little effect on how aggressive Oracle would be in rebalancing the disks.

The correct SQL in this case would be:

 alter diskgroup data rebalance power 1024; 
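Whatever power you pick, the progress and the power actually in effect can be watched from the ASM instance while the rebalance runs; a minimal sketch:

-- POWER is the requested value, ACTUAL is what ASM is currently using,
-- and EST_MINUTES is a rough estimate of the time remaining
select group_number, operation, state, power, actual, sofar, est_work, est_minutes
from v$asm_operation;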

The following is a short demonstration of REBALANCE POWER on 10.2.0.4, 11.2.0.2 and 12.1.0.2 databases.  These examples just confirm the documented maximum values for REBALANCE POWER.

 

SQL> select version from v$instance;
VERSION
-----------------
10.2.0.4.0

SQL> alter diskgroup ASM_COOKED_FS rebalance power 12;
alter diskgroup ASM_COOKED_FS rebalance power 12
                                              *
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup ASM_COOKED_FS rebalance power 11;

Diskgroup altered.

################################################

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.2.0

SQL> alter diskgroup fra rebalance power 1025;
alter diskgroup fra rebalance power 1025
                                      *
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup fra rebalance power 1024;

Diskgroup altered.

################################################

SQL> select version from v$instance;

VERSION
-----------------
12.1.0.2.0

SQL> alter diskgroup data rebalance power 1025;
alter diskgroup data rebalance power 1025
                                     *
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup data rebalance power 1024;

Diskgroup altered.

 

Discover more about our expertise in the world of Oracle.

Log Buffer #449: A Carnival of the Vanities for DBAs


 

This Log Buffer Edition covers some of the niftiest blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • OBIEE 11g and Essbase – Faking Federation Using the GoURL.
  • You can use one of these to link an exception name with an Oracle error number. Once you have done this, you can use the exception name in the exception block which follows the declaration.
  • This is a short post to help out any “googlers” looking for an answer to why their 12.1.0.5 EM Cloud Control install is failing in the make phase with ins_calypso.mk.
  • A short video that Jonathan Lewis did at the OTN lounge at RMOUG a couple of years ago has just been posted on YouTube. It’s about the improvements that appear in histograms in 12c.
  • Changing the name of the server or load-balancing server handling your BI Publisher workload for OEM can be done with a single EM CLI command.

SQL Server:

  • Manoj Pandey was going through some sample Scripts provided by Microsoft SQL Server team on their site, and was checking the JSON Sample Queries procedures views and indexes.sql script file.
  • When you open Excel 2016, go to the Power Pivot tab and click the Manage button to bring up the Power Pivot window. In Power Pivot, click Get External Data, choose From Other Sources, and then choose Microsoft Analysis Services.
  • With the introduction of the StringStoresCompatibilityLevel property for SSAS dimensions in SQL Server 2012, an SSAS database designer may try to create a MOLAP dimension with more unique values than was allowed in previous SQL Server versions.
  • Since SQL Server 2012, SSISDB provides stored procedures to create SSIS package executions. There is one problem though.
  • This is the fourth installment in a blog series. The previous entry is located here. Based on the previous blogs in this series, you should have your database hosted in WASD by now and have secured access to your server.

MySQL:

  • db4free.net finally runs MySQL 5.7 which was released on October 21.
  • What Does The Universal Scalability Law Reveal About MySQL?
  • There are many dimensions by which a DBMS can be better for small data workloads: performance, efficiency, manageability, usability and availability.
  • The new Mydumper 0.9.1 version, which includes many new features and bug fixes, is now available.
  • Nginx is well-known for its ability to act as a reverse-proxy with small memory footprint. It usually sits in the front-end web tier to redirect connections to available backend services, provided these passed some health checks.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Log Buffer #450: A Carnival of the Vanities for DBAs


This Log Buffer Edition picks a few blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • If you grant the DBA role to a user, Oracle also grants it the UNLIMITED TABLESPACE system privilege. If you then revoke the DBA role from this user, Oracle also revokes its UNLIMITED TABLESPACE system privilege.
  • Lost SYSMAN password OEM CC 12gR5.
  • How Terminal Emulation Assists Easy Data Management.
  • Using EMCLI List Verb to Get Detailed Information of EM Targets.
  • How to change apex_public_user password in ORDS.

SQL Server:

  • When the connection between you and the target host crosses multiple servers across the continent, the latency will drive you crazy.
  • SQLCMD and Batch File magic.
  • Greg Larson walks through the GUI installation process for SQL Server 2016 and explores the new installation options.
  • A Single-Parameter Date Range in SQL Server Reporting Services.
  • Is SQL Server killing your application’s performance?

MySQL:

  • MariaDB Galera Cluster 10.0.22 and Connector updates.
  • Building MaxScale from source on CentOS 7.
  • Orchestrator & Pseudo-GTID for binlog reader failover.
  • InnoDB holepunch compression vs the filesystem in MariaDB 10.1.
  • Open-sourcing PinLater: An asynchronous job execution system.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Advanced Compression Option Caveat in Oracle 12c


 

Oracle 12c introduced a new capability to move a partition online, without any interruptions to DML happening at the same time. But, there’s a catch. So far we’ve been able to use basic table compression without having to worry about any extra licensing – it was just a plain EE feature.

If you are planning to use the online partition move functionality, carefully check whether you are using basic compression anywhere. For example:

create tablespace data datafile '+DATA' size 1g
/

create user foo identified by bar
default tablespace data
quota unlimited on data
/

grant create session, create table to foo
/

connect foo/bar

create table test (x int, y varchar2(20))
partition by range (x)
(
partition p1 values less than (100) tablespace data compress,
partition p2 values less than (200) tablespace data,
partition p3 values less than (300) tablespace data
)
/

So we now have this, and our licensing is still as we know it:

select partition_name, compression, compress_for from user_tab_partitions
/
PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
P1 ENABLED BASIC
P2 DISABLED
P3 DISABLED

We can use the new feature on partition p3:

alter table test move partition p3
online
/

Or, we can use the traditional means to compress the partition p2:

alter table test move partition p2
compress
/

But as soon as we do this move “online”, we are required to purchase the Advanced Compression Option:

alter table test move partition p2
compress
online
/

And, even sneakier:
alter table test move partition p1
online
/

Notice how partition p1 – which was previously compressed – was also moved online into a compressed format:

select partition_name, compression, compress_for from user_tab_partitions
/

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
P1 ENABLED BASIC
P2 ENABLED BASIC
P3 DISABLED

 

And that, therefore, required the Advanced Compression Option.

Also note that the usage of this is not caught by dba_feature_usage_statistics (tested on 12.1.0.2):

select name, currently_used from dba_feature_usage_statistics where lower(name) like '%compress%';

NAME CURRE
---------------------------------------------------------------- -----
Oracle Advanced Network Compression Service FALSE
Backup ZLIB Compression FALSE
Backup BZIP2 Compression FALSE
Backup BASIC Compression FALSE
Backup LOW Compression FALSE
Backup MEDIUM Compression FALSE
Backup HIGH Compression FALSE
Segment Maintenance Online Compress FALSE
Compression Advisor FALSE
SecureFile Compression (user) FALSE
SecureFile Compression (system) FALSE
HeapCompression FALSE
Advanced Index Compression FALSE
Hybrid Columnar Compression FALSE
Hybrid Columnar Compression Row Level Locking FALSE

15 rows selected.

I also tried bouncing the database, but the data still wasn’t updated. I would’ve expected this usage to show up under “Segment Maintenance Online Compress”, but in my tests it did not.

This feature restriction isn’t documented anywhere in the official product documentation – at least not that I could find. The only place where I could find this information was in this Oracle document.
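If you want to know up front where this could bite, it may be worth taking an inventory of partitions that already use basic compression before adopting online partition moves; a simple sketch:

-- Partitions already using basic table compression; moving these ONLINE
-- would pull in the Advanced Compression Option as shown above
select table_owner, table_name, partition_name
from dba_tab_partitions
where compression = 'ENABLED'
  and compress_for = 'BASIC';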

 

Discover more about our experience in the world of Oracle.


Oracle E-Business Suite: Virtual Host Names Solution


 

This blog post is a continuation of an earlier post about my musings on Oracle EBS support for virtual host names.

Actually, most parts of Oracle E-Business Suite work with virtual host names without any problem. The only component that doesn’t work when using virtual host names is the Concurrent Managers. Concurrent Managers expect the node name defined in the Concurrent Manager definition screen to match the host name that the FNDLIBR executable reads at the server level. Having the virtual host name as an alias in the hosts file on the server doesn’t cut it for the FNDLIBR executable: FNDLIBR reads the host name of the server using the Unix gethostname system call.

This behaviour of FNDLIBR can be worked around by overriding the Unix gethostname system call using the LD_PRELOAD functionality. There is already a prebuilt program on GitHub that achieves this, called fakehostname. I have tested it and verified that it works with Oracle EBS 11i, R12.0 and R12.1 without any problem.

Here is a demo:


$ hostname
ebs
$ export LD_PRELOAD=/home/oracle/fakehost/libfakehostname.so.1
$ export MYHOSTNAME=ebsfakehost
$ hostname
ebsfakehost
$ export MYHOSTNAME=newebshost
$ hostname
newebshost

 

This utility makes the Concurrent Managers think they are running on the virtual host by overriding the gethostname system call. However, this method of getting EBS to work with virtual host names no longer works with EBS R12.2. The reason is that R12.2 ships with a mix of 32-bit and 64-bit components, whereas earlier releases of EBS such as 11i, 12.0 and 12.1 are 32-bit only, even though they run on 64-bit platforms. We can get EBS R12.2 working by putting both 32-bit and 64-bit versions of the fakehostname library in LD_PRELOAD, but EBS then spews too many warning messages about not being able to load 32-bit/64-bit libraries, which defeats the whole purpose of having a simple solution.

I am working on another way of getting virtual host names working in EBS R12.2. I will post that in my next blog post. Stay tuned!

 

Discover more about our expertise in the world of Oracle.

Log Buffer #451: A Carnival of the Vanities for DBAs


 

The show goes on. This Log Buffer Edition picks some blogs which are discussing new and old features of Oracle, SQL Server and MySQL.

Oracle:

  • Directory Usage Parameters (ldap.ora) list the host names and port number of the primary and alternate LDAP directory servers.
  • Data Visualization Cloud Service (DVCS) is a new Oracle Cloud Service. It is a subset offering of the currently supported Business Intelligence Cloud Service (BICS).
  • ORA-24247: network access denied by access control list (ACL).
  • Latches are low-level serialization mechanisms that protect memory areas inside the SGA. They are lightweight and less sophisticated than enqueues, and can be acquired and released very quickly.
  • handling disks for ASM – when DB, Linux and Storage admins work together.

SQL Server:

  • How to use the Performance Counter to measure performance and activity in Microsoft Data Mining.
  • Phil Factor demonstrates a PowerShell-based technique taking the tedium out of testing SQL DML.
  • Sandeep Mittal provides an introduction to the COALESCE function and shows us how to use it.
  • Hadoop many flavors of SQL.
  • Installing and Getting Started With Semantic Search.

MySQL:

  • Support for storing and querying JSON within SQL is progressing for the ANSI/ISO SQL Standard, and for MySQL 5.7.
  • Loss-less failover using MySQL semi-syncronous replication and MySQL Fabric!
  • Memory consumption: the binary format of the JSON data type should consume more memory.
  • This post compares a B-Tree and LSM for read, write and space amplification. The comparison is done in theory and practice so expect some handwaving mixed with data from iostat and vmstat collected while running the Linkbench workload.
  • If you do not have a reliable network access (i.e. in some remote places) or need something really small to store your data you can now use Intel Edison.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

How to Troubleshoot an ORA-28030 Error


ORA-28030: Server encountered problems accessing LDAP directory service.
Cause: Unable to access LDAP directory service.
Action: Please contact your system administrator.

 

There are many possible causes of this error when you are trying to log in to the database using Oracle Internet Directory (OID) authentication. A sample of the error is shown below:

SQL> conn howie@dbtest
Enter password:
ERROR:
ORA-28030: Server encountered problems accessing LDAP directory service


Warning: You are no longer connected to ORACLE.

 

Here is how I usually troubleshoot this kind of issue, using two examples.

First of all, you need to enable the trace to dump the actual errors in the database:

SQL> alter system set events '28033 trace name context forever, level 9';

 

Second, reproduce the error:

SQL> conn howie@dbtest
Enter password:
ERROR:
ORA-28030: Server encountered problems accessing LDAP directory service

 

Third, disable the trace:

SQL> alter system set events '28033 trace name context off';

After checking the trace files, I found the errors below. They relate to the DNS configuration for the OID server lnx-ldap. Check /etc/hosts or DNS to make sure the OID server lnx-ldap resolves and that port 3131 is reachable:

KZLD_ERR: failed to open connection to lnx-ldap:3131
KZLD_ERR: 28030
KZLD_ERR: failed from kzldob_open_bind.
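Whether lnx-ldap and port 3131 are even the right values to chase depends on the directory configuration of the database server; a quick, hedged sketch of what to confirm first:

-- How the database authenticates against the directory: NONE, PASSWORD or SSL
show parameter ldap_directory_access

-- The directory host, port and default admin context come from ldap.ora
-- (typically under $ORACLE_HOME/network/admin or the directory pointed to by
-- TNS_ADMIN), so verify that file lists the OID server and port you expect.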

Or you may see an error like the following. This happens when the wallet files are corrupted; you need to recreate the wallet and make sure the wallet path is defined properly:

kzld_discover received ldaptype: OID
KZLD_ERR: failed to get cred from wallet
KZLD_ERR: Failed to bind to LDAP server. Err=28032
KZLD_ERR: 28032
KZLD is doing LDAP unbind
KZLD_ERR: found err from kzldini.

There are many possible causes of ORA-28030; in this blog I am simply giving you hints for identifying the root cause.

Hope it helps!

 

Discover more about our expertise in the world of Oracle.

Log Buffer #452: A Carnival of the Vanities for DBAs


This Log Buffer Edition covers top Oracle, SQL Server and MySQL blog posts of the week.

Oracle:

  • In Oracle EBS 12.0 and 12.1 the Workflow notification system was not enabled to send e-mail notifications to users or roles who happened to have multiple e-mail addresses associated to them.
  • Just how can a SQL Developer user quickly build out a SQL script for a database user that will include ALL of their privileges, roles, and system grants?
  • Oracle BI 12c has been released for some time now. There are a few changes in the way it is installed compared to the previous 11g releases. This post is about installing and configuring OBIEE 12c with detailed step-by-step instructions (Linux x86-64 in this case).
  • In today’s digital economy, customers want effortless engagements and answers to their questions regardless of how they connect with a brand.
  • Upgrade to Oracle Database 12c and Avoid Query Regression.

SQL Server:

  • Continuous integration (CI) is the process of ensuring that all code and related resources in a development project are integrated regularly and tested by an automated build system.
  • SSIS Issues after Master DB Corruption – “Please Recreate Master Key” When Running Package.
  • Check FileSize and LogUsage for all DBs.
  • Other Users Cannot Execute SSIS Packages after migration.
  • How to Get Started Using SQL Server in Azure.

MySQL:

  • Amazon Aurora in sys bench benchmarks.
  • “Data” and “Performance” is where MySQL Cluster’s heart is. In-memory performance and always-up drives our agenda. The Percona Live Data Performance Conference is coming up with two submitted sessions about Cluster.
  • Fixing errant transactions with mysqlslavetrx prior to a GTID failover.
  • MariaDB CONNECT storage engine handles access to JSON files through standard SQL. It comes with a set of UDFs (user defined functions) to manipulate the JSON format. This JSON content can be stored in a normal text column.
  • Become a ClusterControl DBA: Managing your Database Configurations.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Why Locking Oracle Accounts is a Bad Idea


 

Time and time again I run into database accounts that are marked “LOCKED” or “EXPIRED & LOCKED”. The main problem lies with how Oracle handles a failed login attempt when the account is locked. In this blog I will discuss why locking Oracle accounts is a bad idea.

Let’s consider the following scenario:

SQL> create user scott identified by tiger account lock;

User created.

SQL> select username, account_status from dba_users where username='SCOTT';

USERNAME                       ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT                          LOCKED

 

So what happens if I put on my black hat, and try to get into this database? I may probe for some common users, and just happen to come across this:


SQL> connect scott/abc
ERROR:
ORA-28000: the account is locked
Warning: You are no longer connected to ORACLE.
SQL>

 

What Oracle does there is give me a very valuable piece of information: it tells me that this user exists in the database. Why is that important?

Let’s see what we can find out – without even being able to connect, based solely on the account status of some common accounts:

 

USERNAME		       ACCOUNT_STATUS
------------------------------ --------------------------------
ANONYMOUS		       EXPIRED & LOCKED
APEX_030200		       LOCKED
APEX_PUBLIC_USER	       LOCKED
CTXSYS			       EXPIRED & LOCKED
DIP			       EXPIRED & LOCKED
EXFSYS			       EXPIRED & LOCKED
FLOWS_FILES		       LOCKED
OLAPSYS 		       EXPIRED & LOCKED
ORACLE_OCM		       EXPIRED & LOCKED
OUTLN			       EXPIRED & LOCKED
SQLTXADMIN		       EXPIRED & LOCKED
WMSYS			       EXPIRED & LOCKED
XDB			       EXPIRED & LOCKED
XS$NULL 		       EXPIRED & LOCKED

 

Simply by trying to connect to some of these, and Oracle telling me that the account is locked, I now know that the database has all of the following installed:

 

– APEX
– OLAP
– Oracle Text
– XML Database

 

That’s a lot of information I was just given for free. Depending on the components I find, I could also deduce that the Oracle JVM is installed in the database. And that frequently hits the news with newly discovered vulnerabilities.

In essence this means that by locking your accounts, you leave the door open way wider than you’re thinking. It’s a totally counter-productive way of doing things.

So what’s better?

The best approach is a very simple one. Putting my white hat back on, I just assign the user an impossible password hash, like so:


alter user scott account unlock identified by values 'impossible';

 

It’s not possible for this user to ever log in while this hash is in place. And if we try, all we get is:


SQL> connect scott/abc
ERROR:
ORA-01017: invalid username/password; logon denied

 

Warning: You are no longer connected to ORACLE.

The second thing you’d want to do is ensure that those users’ passwords never expire, or you’d end up with the same EXPIRED & LOCKED status again.
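One simple way to do that (a sketch; the profile name is made up) is a dedicated profile with non-expiring passwords for these neutered accounts:

-- Passwords assigned to this profile never expire
create profile no_expiry limit password_life_time unlimited;

alter user scott profile no_expiry;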

Happy unlocking, and stay secure! :)

 

Discover more about our expertise in the world of Oracle.

How to Decipher Oracle Internal Datatype Storage


What started out as an investigation into how the optimizer deals with predicates that are outside the known range of values became something else when I tried to determine just what Oracle believes the low and high values of that range to be.

I didn’t expect to have anything to add to the topic, as it has been rather well covered; I just wanted to better understand it by creating a few examples that demonstrate what can happen.

As of yet, I have not gotten that far.

One of the first things I wanted to know for this is what Oracle believes the low and high values to be.

These can be seen in both DBA_TAB_COLUMNS and DBA_TAB_COL_STATISTICS in the LOW_VALUE and HIGH_VALUE columns.

The DBA_TAB_COL_STATISTICS view is preferred, as these columns are maintained in DBA_TAB_COLUMNS only for backward compatibility with Oracle 7.


SQL> desc dba_tab_col_statistics
 Name              Null?    Type
 ----------------- -------- ------------------------------------
 OWNER                      VARCHAR2(128)
 TABLE_NAME                 VARCHAR2(128)
 COLUMN_NAME                VARCHAR2(128)
 NUM_DISTINCT               NUMBER
 LOW_VALUE                  RAW(1000)
 HIGH_VALUE                 RAW(1000)
 DENSITY                    NUMBER
 NUM_NULLS                  NUMBER
 NUM_BUCKETS                NUMBER
 LAST_ANALYZED              DATE
 SAMPLE_SIZE                NUMBER
 GLOBAL_STATS               VARCHAR2(3)
 USER_STATS                 VARCHAR2(3)
 NOTES                      VARCHAR2(63)
 AVG_COL_LEN                NUMBER
 HISTOGRAM                  VARCHAR2(15)
 SCOPE                      VARCHAR2(7)

The LOW_VALUE and HIGH_VALUE values are stored as RAW, so they must be in Oracle’s internal storage format for whichever datatype the column is.

Oracle does supply conversion routines via the DBMS_STATS package.

These routines are deployed as procedures. As Oracle 12c allows functions to be defined directly in a SQL statement (in the WITH clause), these procedures can be wrapped and used in queries written for a 12c database.

Using the DBMS_STATS conversion procedures in databases < 12c requires creating functions so that the values may be returned to a SQL statement. While that method will work, it is often not desirable, and may not even be possible, particularly in a production database.

When I say ‘not even be possible’ what I mean is not that it cannot be done, but that doing so is probably not allowed in many databases.
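For reference, the wrapper-function approach mentioned above would look roughly like this. It is only a sketch for the NUMBER overload, and the function name is made up; DBMS_STATS.CONVERT_RAW_VALUE is overloaded for other datatypes as well, and creating it requires privileges you may not have in production:

-- Wraps the DBMS_STATS.CONVERT_RAW_VALUE procedure so it can be called from SQL
create or replace function raw_to_number(p_raw in raw)
return number
as
   v_result number;
begin
   dbms_stats.convert_raw_value(p_raw, v_result);
   return v_result;
end;
/

-- usage: select column_name, raw_to_number(low_value) from user_tab_col_statistics where ...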

To create a SQL statement that can show the high and low values, it will be necessary to use some other means.

Let’s start off by creating some data to work with.


define chars='ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz'

create table low_high
as
select id
   , mod(id,128) n1
   , substr('&&chars',mod(id,42)+1, 20) c1
   , sysdate-(mod(id,1000)+1) d1
from (
   select level id from dual
   connect by level <= 128 * 1024
)
/

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'LOW_HIGH', method_opt => 'for all columns size auto')

Now that we have a table, let's take a look at the ranges of values.
 Note: I am using the _TAB_COLUMNS views for some queries just to simplify the SQL for demonstration.
col low_value format a40
col high_value format a40

prompt
prompt NUMERIC
prompt

select column_name, low_value, high_value
from user_tab_columns
where table_name = 'LOW_HIGH'
   and data_type = 'NUMBER'
/

prompt
prompt VARCHAR2
prompt

select column_name, low_value, high_value
from user_tab_columns
where table_name = 'LOW_HIGH'
   and data_type = 'VARCHAR2'
/

prompt
prompt DATE
prompt

select column_name, low_value, high_value
from user_tab_columns
where table_name = 'LOW_HIGH'
   and data_type = 'DATE'
/

NUMERIC

COLUMN LOW_VALUE                                HIGH_VALUE
------ ---------------------------------------- ----------------------------------------
ID     C102                                     C30E0B49
N1     80                                       C2021C

2 rows selected.

VARCHAR2

COLUMN LOW_VALUE                                HIGH_VALUE
------ ---------------------------------------- ----------------------------------------
C1     303132333435363738396162636465666768696A 666768696A6B6C6D6E6F70717273747576777879

1 row selected.

DATE

COLUMN LOW_VALUE                                HIGH_VALUE
------ ---------------------------------------- ----------------------------------------
D1     7871030D121C04                           78730C07121C04

1 row selected.

Clearly the values being stored for LOW_VALUE and HIGH_VALUE are of little use to us in their current format.

What can we do?

For the NUMBER and character data types (VARCHAR2, VARCHAR, CHAR) the package UTL_RAW can be used to get the actual values.

Here is an example of converting some of these to a human readable format.


col low_value format 999999999999
col high_value format 999999999999

select  column_name
   , utl_raw.cast_to_number(low_value) low_value
   , utl_raw.cast_to_number(high_value) high_value
from user_tab_columns
where table_name = 'LOW_HIGH'
   and data_type = 'NUMBER'
/

col low_value format a20
col high_value format a20

select  column_name
   , utl_raw.cast_to_varchar2(low_value) low_value
   , utl_raw.cast_to_varchar2(high_value) high_value
from user_tab_columns
where table_name = 'LOW_HIGH'
   and data_type = 'VARCHAR2'
/

COLUMN                             LOW_VALUE    HIGH_VALUE
------------------------------ ------------- -------------
N1                                         0           127
ID                                         1        131072

2 rows selected.

COLUMN                         LOW_VALUE            HIGH_VALUE
------------------------------ -------------------- --------------------
C1                             0123456789abcdefghij fghijklmnopqrstuvwxy

1 row selected.

These values can be verified, as shown here with the N1 column:

SQL> select min(n1), max(n1) from low_high;

 MIN(N1) MAX(N1)
---------- ----------
 0 127

1 row selected.

So far I have done this only with the simple versions of these data types.
Variations such as NVARCHAR2, BINARY_FLOAT and others may require different handling.

What is missing? The DATE column has not yet been handled.

Converting the raw date format to a readable date is not so straightforward, as there does not seem to be any conversion function available for that (if you know of one, please write about it in the comments section of this article).

 

Oracle DATE Format

First it will be necessary to know how Oracle stores a date in the database. Oracle’s internal date format has been documented a number of times and is well known, such as in the following Oracle Support Note:

How does Oracle store the DATE datatype internally? (Doc ID 69028.1)

Oracle dates consist of seven parts: century, year, month of the year, day of the month, and the hours, minutes and seconds after midnight.

The internal representation of this format can be seen by running the script in Example 1.

 

Example 1: dumping the internal date format

alter session set nls_date_format = 'mm/dd/yyyy hh24:mi:ss';
col today format a40
drop table t1;

create table t1
as select sysdate today
from dual;

select to_char(today) today
from t1
union
select dump(today) today
from t1;

TODAY
----------------------------------------
12/09/2015 13:13:57
Typ=12 Len=7: 120,115,12,9,14,14,58

2 rows selected.

The hour, minute and second are all stored in excess-1 notation, so 1 must be subtracted from them to get the correct time. Using excess-1 notation prevents a zero byte from being stored.

The month and day are both stored with the actual value, which can be seen in the SELECT output.

The values for the century and year are stored in excess-100 notation.

This means that 100 must be subtracted from the value before using it.

In the case of the date in Example 1, the year is clearly seen by subtracting 100 from 115, giving 15 for 2015.
The century is somewhat different. Not only must 100 be subtracted from the value, it must then be multiplied by 100: here, (120 - 100) * 100 = 2000.

The following example demonstrates how the components of a date can be extracted from the information returned by the dump() function.

col cyear format 9999
col month format a2
col day format a2
col hour format 99
col minute format 99
col second format 99

select
        -- extract the century and year information from the
        -- internal date format
        -- century = (century byte -100) * 100
        (
                to_number(
                        -- parse out integer appearing before first comma
                        substr( startup_dump, 1, instr(startup_dump,',')-1) - 100
                ) * 100
        )
        +
        -- year = year byte - 100
        (
                to_number(
                        substr(
                                startup_dump,
                                -- get position of 2nd comma
                                instr(startup_dump,',',2)+1,
                                -- get position of 2nd comma - position of 1st comma
                                instr(startup_dump,',',1,2) - instr(startup_dump,',',1,1) -1
                        )
                )
                - 100
        ) cyear
         , substr(
            startup_dump,
            instr(startup_dump,',',1,2)+1,
            instr(startup_dump,',',1,3) - instr(startup_dump,',',1,2) -1
         ) month
         , substr(
            startup_dump,
            instr(startup_dump,',',1,3)+1,
            instr(startup_dump,',',1,4) - instr(startup_dump,',',1,3) -1
         ) day
         , to_number(substr(
            startup_dump,
            instr(startup_dump,',',1,4)+1,
            instr(startup_dump,',',1,5) - instr(startup_dump,',',1,4) -1
         ))-1 hour
         , to_number(substr(
            startup_dump,
            instr(startup_dump,',',1,5)+1,
            instr(startup_dump,',',1,6) - instr(startup_dump,',',1,5) -1
         ))-1 minute
         , to_number(substr(
            startup_dump,
            instr(startup_dump,',',1,6)+1
         ))-1 second
from (
        -- return just the date bytes from the dump()
        select substr(dump(startup_time),15) startup_dump
        from v$instance
) a

SQL> /

CYEAR MO DA HOUR MINUTE SECOND
----- -- -- ---- ------ ------
 2015 11 18   17     33     32

1 row selected.

Note: the internal format for SYSDATE is not the same as dates stored in a table.
This is also true for TIMESTAMP and SYSTIMESTAMP.

The internal format for TIMESTAMP columns can be seen in this OraFaq Article.

 

Putting it All Together

So, now we can make use of this to examine the values Oracle stores to bind the ranges of columns, this time including the DATE columns.

col low_value format a20
col high_value format a20
col table_name format a10 head 'TABLE'
col data_type format a20
col column_name format a6 head 'COLUMN'

set linesize 200 trimspool on
set pagesize 60

select
   us.table_name,
   uc.data_type,
   us.column_name,
   case
      when uc.data_type in ('VARCHAR2','VARCHAR','CHAR')  then
         utl_raw.cast_to_varchar2(us.low_value)
      when uc.data_type = 'NUMBER' then
         to_char(utl_raw.cast_to_number(us.low_value) )
      when uc.data_type = 'DATE' then
         -- extract the century and year information from the
         -- internal date format
         -- century = (century byte -100) * 100
         to_char((
            to_number(
                  -- parse out integer appearing before first comma
                  substr( substr(dump(us.low_value),15), 1, instr(substr(dump(us.low_value),15),',')-1) - 100
            ) * 100
         )
         +
         -- year = year byte - 100
         (
            to_number(
                  substr(
                     substr(dump(us.low_value),15),
                     -- get position of 2nd comma
                     instr(substr(dump(us.low_value),15),',',2)+1,
                     -- get position of 2nd comma - position of 1st comma
                     instr(substr(dump(us.low_value),15),',',1,2) - instr(substr(dump(us.low_value),15),',',1,1) -1
                  )
            )
            - 100
         )) --current_year
                  || '-' ||
                  lpad(
                     substr(
                        substr(dump(us.low_value),15),
                        instr(substr(dump(us.low_value),15),',',1,2)+1,
                        instr(substr(dump(us.low_value),15),',',1,3) - instr(substr(dump(us.low_value),15),',',1,2) -1
                     ) -- month
                     ,2,'0'
                  )
                  ||  '-' ||
                  lpad(
                     substr(
                        substr(dump(us.low_value),15),
                        instr(substr(dump(us.low_value),15),',',1,3)+1,
                        instr(substr(dump(us.low_value),15),',',1,4) - instr(substr(dump(us.low_value),15),',',1,3) -1
                     ) -- day
                     ,2,'0'
                  )
                  || ' ' ||
                  lpad(
                     to_char(to_number(
                        substr(
                              substr(dump(us.low_value),15),
                              instr(substr(dump(us.low_value),15),',',1,4)+1,
                              instr(substr(dump(us.low_value),15),',',1,5) - instr(substr(dump(us.low_value),15),',',1,4) -1
                        )
                     )-1)
                     ,2,'0'
                  ) -- hour
                  || ':' ||
                  lpad(
                     to_char(
                        to_number(
                              substr(
                              substr(dump(us.low_value),15),
                              instr(substr(dump(us.low_value),15),',',1,5)+1,
                              instr(substr(dump(us.low_value),15),',',1,6) - instr(substr(dump(us.low_value),15),',',1,5) -1
                              )
                        )-1
                     )
                     ,2,'0'
                  ) -- minute
                  || ':' ||
                  lpad(
                     to_char(
                        to_number(
                              substr(
                              substr(dump(us.low_value),15),
                              instr(substr(dump(us.low_value),15),',',1,6)+1
                              )
                        )-1
                     )
                     ,2,'0'
                  ) --second
         else 'NOT SUPPORTED'
         end low_value,
         -- get the high value
   case
      when uc.data_type in ('VARCHAR2','VARCHAR','CHAR')  then
         utl_raw.cast_to_varchar2(us.high_value)
      when uc.data_type = 'NUMBER' then
         to_char(utl_raw.cast_to_number(us.high_value) )
      when uc.data_type = 'DATE' then
         -- extract the century and year information from the
         -- internal date format
         -- century = (century byte -100) * 100
         to_char((
            to_number(
                  -- parse out integer appearing before first comma
                  substr( substr(dump(us.high_value),15), 1, instr(substr(dump(us.high_value),15),',')-1) - 100
            ) * 100
         )
         +
         -- year = year byte - 100
         (
            to_number(
                  substr(
                     substr(dump(us.high_value),15),
                     -- get position of 2nd comma
                     instr(substr(dump(us.high_value),15),',',2)+1,
                     -- get position of 2nd comma - position of 1st comma
                     instr(substr(dump(us.high_value),15),',',1,2) - instr(substr(dump(us.high_value),15),',',1,1) -1
                  )
            )
            - 100
         )) --current_year
                  || '-' ||
                  lpad(
                     substr(
                        substr(dump(us.high_value),15),
                        instr(substr(dump(us.high_value),15),',',1,2)+1,
                        instr(substr(dump(us.high_value),15),',',1,3) - instr(substr(dump(us.high_value),15),',',1,2) -1
                     ) -- month
                     ,2,'0'
                  )
                  ||  '-' ||
                  lpad(
                     substr(
                        substr(dump(us.high_value),15),
                        instr(substr(dump(us.high_value),15),',',1,3)+1,
                        instr(substr(dump(us.high_value),15),',',1,4) - instr(substr(dump(us.high_value),15),',',1,3) -1
                     ) -- day
                     ,2,'0'
                  )
                  || ' ' ||
                  lpad(
                     to_char(to_number(
                        substr(
                              substr(dump(us.high_value),15),
                              instr(substr(dump(us.high_value),15),',',1,4)+1,
                              instr(substr(dump(us.high_value),15),',',1,5) - instr(substr(dump(us.high_value),15),',',1,4) -1
                        )
                     )-1)
                     ,2,'0'
                  ) -- hour
                  || ':' ||
                  lpad(
                     to_char(
                        to_number(
                              substr(
                              substr(dump(us.high_value),15),
                              instr(substr(dump(us.high_value),15),',',1,5)+1,
                              instr(substr(dump(us.high_value),15),',',1,6) - instr(substr(dump(us.high_value),15),',',1,5) -1
                              )
                        )-1
                     )
                     ,2,'0'
                  ) -- minute
                  || ':' ||
                  lpad(
                     to_char(
                        to_number(
                              substr(
                              substr(dump(us.high_value),15),
                              instr(substr(dump(us.high_value),15),',',1,6)+1
                              )
                        )-1
                     )
                     ,2,'0'
                  ) --second
         else 'NOT SUPPORTED'
         end high_value
from all_tab_col_statistics us
join all_tab_columns uc on uc.owner = us.owner
   and uc.table_name = us.table_name
   and uc.column_name = us.column_name
   and us.owner = USER
   and us.table_name = 'LOW_HIGH'
order by uc.column_id

SQL> /

TABLE      DATA_TYPE            COLUMN LOW_VALUE            HIGH_VALUE
---------- -------------------- ------ -------------------- --------------------
LOW_HIGH   NUMBER               ID     1                    131072
LOW_HIGH   NUMBER               N1     0                    127
LOW_HIGH   VARCHAR2             C1     0123456789abcdefghij fghijklmnopqrstuvwxy
LOW_HIGH   DATE                 D1     2013-03-13 17:27:03  2015-12-07 17:27:03

4 rows selected.

Verify the D1 column values:

SQL>  select min(d1) min_d1, max(d1) max_d1 from low_high;

MIN_D1              MAX_D1
------------------- -------------------
2013-03-13 17:27:03 2015-12-07 17:27:03

1 row selected.

And there you have it. We can now see, in human-readable form, the low and high values that Oracle has stored for each column. While it is a rather complex SQL statement, it really is not difficult to understand once you know the purpose behind it. And the beauty of this script is that no functions or procedures need to be created to make use of it.

If you would like to add TIMESTAMP or any other data type to the script, please do so!
The SQL can be found here in the Low-High GitHub repo.

Now that the values can be viewed, the next task will be to put the script to use with some examples, to see how Oracle handles predicates outside the known range of values.
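As a quick preview of that kind of test, here is a minimal sketch against the LOW_HIGH table from above (the cut-off date and plan format options are just illustrative assumptions). The recorded high value of D1 is 2015-12-07, so a predicate beyond that date lets you watch the optimizer prorate its row estimate as the value moves further out of the known range:

-- D1's high value is 2015-12-07; this predicate lies outside the known range
explain plan for
select count(*) from low_high
 where d1 > date '2016-06-01';

select * from table(dbms_xplan.display(format => 'BASIC ROWS'));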

 

Log Buffer #453: A Carnival of the Vanities for DBAs


This week, the Log Buffer Edition digs deep into the world of Oracle, SQL Server and MySQL and brings you some of the best blog posts around.

Oracle:

  • Regardless of what type of industry or business you are involved in, the bottom-line goal is to optimize sales; and that involves replacing any archaic tech processes with cutting-edge technology and substituting any existing chaos with results-driven clarity.
  • Oracle Private Cloud Appliance 2.1.1 Released.
  • Every version of the optimizer enhances existing mechanisms and introduces new features, while 12c has introduced some of the most sophisticated transformation to date.
  • PLSQL, syslog and the story of Bolas spiders.
  • Here is why you need to be super careful when using LOB’s within triggers.

SQL Server:

  • Phil Factor talks about late in the day for a DBA.
  • This article details SMKs, DMKs and certificates in SQL Server as they relate to Transparent Data Encryption and Encrypted Backups.
  • In traditional relational schema there can be a lot of one-to-many relationships (e.g. Person may have several phones, or several email addresses).
  • Building Apps for Windows 10 with Visual Studio 2015.
  • The GA of System Center Configuration Manager 1511.

MySQL:

  • rows_examined_per_scan, rows_produced_per_join: EXPLAIN FORMAT=JSON answers on question “What number of filtered rows mean?”.
  • Secure communications is a core component of a robust security policy, and MySQL Server 5.7.10 – the first maintenance release of MySQL Server 5.7 – introduces needed improvements in this area.
  • MariaDB 5.5.47 and updated connectors now available.
  • Google Cloud SQL is a fully managed database service that makes it easy to set-up, maintain, manage, and administer your relational MySQL databases in the cloud. Cloud SQL allows you to focus on your applications rather than administering your databases.
  • A long time ago, libmysqlclient came in two versions: one that was thread safe and one that wasn’t. But that was a long time ago. Since MySQL 5.5, the thread safe libmysqlclient_r library has just been a symlink to the libmysqlclient library, which has been thread safe at least since then.

 

Learn more about Pythian’s expertise in Oracle SQL Server & MySQL.


ASM Internals: Tracking Down Failed ASM Reads


On a live customer system, we’ve encountered repeated incidents of errors such as the following:

WARNING: Read Failed. group:1 disk:3 AU:86753 offset:524288 size:262144

Since Oracle doesn’t tell us what exactly is failing here, some research was in order. There are a few posts out there about mapping ASM allocation units (AU) to database extents, but I felt that some of them weren’t entirely clear on what is being done, how, and why. This prompted me to do some digging of my own.

This is our starting point. We know that:

  • The error happened on RAC instance 1 (since it was logged in the alert log of said instance).
  • The ASM disk group number is 1.
  • The ASM disk number is 3.
  • The AU number is 86753.
  • We can’t read that AU.
  • Database version is 11.2.0.4 on Linux.
  • ASM disk group redundancy is external.

We can further tell that the failed read was at byte offset 524288 (which is 512KB) into the AU, and that it was a multi-block read of 32 blocks (262144 / 8192). Thus it was likely a full table scan.

Disclaimer: what follows next is undocumented, and the usual disclaimers apply: check with Oracle support before running any of this against your production system.

In an ASM instance, Oracle exposes the ASM AU map in the fixed table X$KFFXP. We can query that to get some additional details, using the information we already have:

select inst_id, group_kffxp, number_kffxp, pxn_kffxp  
  from x$kffxp 
 where group_kffxp=1 
   and disk_kffxp=3 
   and au_kffxp=86753;

   INST_ID GROUP_KFFXP NUMBER_KFFXP  PXN_KFFXP
---------- ----------- ------------ ----------
         1           1          287       5526

 


Note: you have to run this in an ASM instance. On a database instance, the table doesn’t contain any rows (at least in the version I tested this on).

The columns in this table aren’t officially documented, but my own testing confirms that the information that can be found on Google is fairly reliable in current versions. What we used here is:

  • GROUP_KFFXP – the ASM disk group number, 1 in our case.
  • DISK_KFFXP – the ASM disk number, 3.
  • AU_KFFXP – the AU number, 86753.

The view now tells us the next two pieces of the puzzle that we need:

  • NUMBER_KFFXP – the ASM file number (not to be confused with the Oracle data file number).
  • PXN_KFFXP – the physical extent number in that file.

Armed with this information, we can now determine the details of the file that’s experiencing read errors:

set lines 200 pages 999
col dg for a12
col name for a20
col fname for a40
select t.name,
       substr(f.name, instr(f.name,'/',-1) + 1) as fname, 
       a.file_number, f.file# as DBFILE#,
       f.bytes/1024/1024 as file_mb
  from v$datafile f, v$tablespace t, v$asm_diskgroup g, 
       v$asm_alias a, v$asm_file af
 where g.name(+) = substr(f.name,2,instr(f.name,'/')-2)
   and a.name(+) = upper(substr(f.name, instr(f.name,'/',-1) + 1))
   and a.file_number = af.file_number
   and a.group_number = af.group_number
   and f.ts# = t.ts#
   and af.file_number = 287
/

NAME                 FNAME               FILE_NUMBER DBFILE#    FILE_MB
-------------------- ------------------- ----------- ---------- ----------
USERS                users.287.795706011         287          4      11895

We can see that the file is part of the USERS tablespace, and has a data file ID of 4.

Let’s double check our environment:

select allocation_unit_size from v$asm_diskgroup where group_number=1;

ALLOCATION_UNIT_SIZE
--------------------
             1048576

select block_size from dba_tablespaces where tablespace_name='USERS';

BLOCK_SIZE
----------
      8192

Now we have all that we need to get the final piece of our puzzle. We can use the following formula to calculate the block position of the extent in the file, and from there, query DBA_EXTENTS to see which segment it belongs to.


[ AU_SIZE ] * [ PXN ] / [ BLOCK_SIZE ]
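With our numbers, that works out to 1048576 * 5526 / 8192 = 707,328, i.e. the physical extent starts at block 707,328 of data file 4.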

In our case, that becomes the following query:

select owner, segment_name, segment_type 
  from dba_extents 
 where file_id = 4 
   and 1048576 * 5526 / 8192 between block_id and block_id + blocks -1;

OWNER                          SEGMENT_NAME                    SEGMENT_TYPE
------------------------------ ------------------------------- ------------------
FOO                            CUSTOMER_OLD                    TABLE

We can also confirm that our result is correct by attempting to read it (note we are forcing a full scan to make sure we’re actually reading the table segment):

select /*+ full(t) */ count(*) from FOO.CUSTOMER_OLD t
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 741
Session ID: 695 Serial number: 40797

And sure enough, we see our familiar error message in the alert log instantly:

Thu Dec 10 04:24:23 2015
WARNING: Read Failed. group:1 disk:3 AU:86753 offset:524288 size:262144
ERROR: unrecoverable error ORA-15188 raised in ASM I/O path; terminating process 741 

We found the affected segment, and can now proceed with the usual recovery scenarios that are available to us. In this particular case, the table can likely be dropped as it was a backup.

Nonetheless, it is quite clear that the underlying disk (disk number 3 in group number 1) is faulty and must be replaced. There is one more thing, though, that we need to be mindful of. In order to replace the disk, Oracle has to be able to read all the allocated AUs on that disk as part of the re-balance process that is triggered when dropping/adding disks.

How do we tell if there aren’t any other segments that can’t be read? We’d have to be able to retrieve a list of all extents that are located on the disk in question. Of course, we can simply go for it, and let the drop/re-balance operation fail, which would also tell us that there are additional areas with problems on that disk. Since this is production, I prefer to be in the know instead of running something blindly. Additionally, you may hit one error during the re-balance, correct that, re-try and then hit another one. Rinse and repeat. Doesn’t sound too comforting, does it? So let’s see how we can get that information together.

There is but one problem we need to solve first. The data that we need is not available in the same place:

  1. X$KFFXP is only available on an ASM instance.
  2. DBA_EXTENTS is only available on a database instance.

I opted to go for the external table approach, and pull the data out of ASM first by creating the file /tmp/asm_map.sql with these contents:

set echo off
set feedback off
set termout off
set pages 0
spool /tmp/asm_map.txt
select x.number_kffxp || ',' || x.pxn_kffxp as data
  from x$kffxp x
 where x.group_kffxp=1
   and x.disk_kffxp=3
   and x.number_kffxp > 255
/
spool off

Again, we are specifying the group number from our error message (GROUP_KFFXP=1) and the problematic disk (DISK_KFFXP=3). The NUMBER_KFFXP > 255 filter skips ASM metadata files, as file numbers below 256 are reserved for ASM’s own metadata.

Execute that script while connected to your ASM instance. Beware: if you have huge LUNs, this may write a lot of data, so you may want to relocate the output file to an alternate location. Again, please verify with Oracle support before running this against your production database, as with anything that involves underscore parameters or x$ tables.
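For clarity, running it could look roughly like the following (the connect method is an assumption; use whatever is appropriate in your environment):

-- in SQL*Plus, against the ASM instance
connect / as sysasm
@/tmp/asm_map.sql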

Next, switch to the database instance, and run the following:

create directory tmp_asm as '/tmp'
/
create table asm_map
(
  asm_file number, 
  asm_pxn number
)
organization external 
(
  type oracle_loader 
  default directory tmp_asm
  access parameters
  (
    records delimited by newline 
    fields terminated by ','
  )
  location ( 'asm_map.txt' )
)
/

Ensure that the data is properly readable:

select * from asm_map where rownum < 10
/
  ASM_FILE    ASM_PXN
---------- ----------
       256          4
       256          7
       256         21
       256         28
       256         35
       256         49
       256         52
       256         75
       256         88

9 rows selected.

Now we can join to v$datafile and dba_extents to get the actual data we’re after. Let’s first build a list of tablespaces and data files that are stored on this disk:

col ts_name for a30
col fname for a50
set lines 200 pages 999
select unique t.name as TS_NAME,
       substr(f.name, instr(f.name,'/',-1) + 1) as fname, a.file_number, f.file# as DBFILE#
  from v$datafile f, v$tablespace t, v$asm_diskgroup g, v$asm_alias a, v$asm_file af, ( select distinct asm_file from asm_map ) m
 where g.name(+) = substr(f.name,2,instr(f.name,'/')-2)
   and a.name(+) = upper(substr(f.name, instr(f.name,'/',-1) + 1))
   and a.file_number = af.file_number
   and a.group_number = af.group_number
   and f.ts# = t.ts#
   and af.file_number = m.asm_file
 order by 1,2
/

Now let’s expand that to also include dba_extents. I am creating a copy of the contents of dba_extents, since that view is known to perform poorly, particularly on large databases, and the query might otherwise take an extremely long time. This extra step is especially helpful if you want to query the data in dba_extents repeatedly, which an exercise like this is a good example of.

create table tmp_extents
tablespace users
as 
select * from dba_extents
/

And now we’re ready to get the list of all segments that would be affected by problems on this one disk. This query gives us a list of everything stored on that disk:

col ts_name for a30
col obj for a100
set lines 200 pages 999
col segment_type for a18
set lines 200 pages 999
select unique t.name as TS_NAME, e.owner || '.' || e.segment_name as obj, e.segment_type,
       a.file_number, f.file# as DBFILE#
  from v$datafile f, v$tablespace t, v$asm_diskgroup g, v$asm_alias a, v$asm_file af, asm_map m, tmp_extents e
 where g.name(+) = substr(f.name,2,instr(f.name,'/')-2)
   and a.name(+) = upper(substr(f.name, instr(f.name,'/',-1) + 1))
   and a.file_number = af.file_number
   and a.group_number = af.group_number
   and f.ts# = t.ts#
   and af.file_number = m.asm_file
   and f.file# = e.file_id
   and t.name = e.tablespace_name
   and g.allocation_unit_size * m.asm_pxn / f.block_size between e.block_id and e.block_id + e.blocks -1
 order by 1,3,2
/

Now, with this information, you can proceed to verify whether any other segments are sitting on defective sectors and cannot be read:

  • Tables can be full scanned.
  • Indexes can either be rebuilt online, or read with a fast full scan plan.
  • LOBs can be read with a small PL/SQL block (see the sketch after this list for these three cases).
  • Clusters should be okay as well if the contained tables are scanned, as that will read the respective blocks.
  • Partitioned tables and indexes can be treated analogously to their non-partitioned counterparts.
  • If undo segments are affected and can’t be read, you may want to involve Oracle support at this point.
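As a rough sketch of what the first three checks could look like (all object, index, and column names below are hypothetical; the earlier full scan of FOO.CUSTOMER_OLD is the real-life pattern for tables):

-- table: force a full scan that reads every block of the segment
select /*+ full(t) */ count(*) from foo.some_table t;

-- index: force an index fast full scan, which reads all index blocks
select /*+ index_ffs(t some_table_ix) */ count(*) from foo.some_table t;

-- LOB column: read every chunk of every LOB value with a small PL/SQL block
declare
  l_buf varchar2(32767);
  l_amt pls_integer;
  l_off pls_integer;
begin
  for r in (select doc_body from foo.docs where doc_body is not null) loop
    l_off := 1;
    loop
      l_amt := 32767;
      begin
        dbms_lob.read(r.doc_body, l_amt, l_off, l_buf);
      exception
        when no_data_found then exit;  -- reached the end of this LOB
      end;
      l_off := l_off + l_amt;
    end loop;
  end loop;
end;
/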

By doing that, you can ensure that any potential problems are detected before applications or end users are affected by them. And if you don’t detect any other issues, you can feel fairly safe that swapping out the disk won’t hit you with any unexpected errors.
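For completeness, the actual disk swap would then be the standard ASM disk replacement DDL, roughly like this sketch (disk group, disk, and path names are hypothetical; check V$ASM_DISK for the real ones):

-- add the replacement disk and drop the faulty one in a single operation;
-- the rebalance will error out if it hits any remaining unreadable AUs
alter diskgroup data add disk '/dev/asm-disk-new'
  drop disk data_0003
  rebalance power 8;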

Once we are done, let’s not forget to clean up:

drop table tmp_extents
/
drop table asm_map
/
drop directory tmp_asm
/

Discover more about our expertise in the world of Oracle.

Is Oracle Smart Flash Cache a “SPOF”?


 

Oracle Smart Flash Cache (OSFC) is a nice feature that was introduced in Oracle 11g Release 2. Only recently did I have a real use case for it, and when I looked into it, my main goal was to determine whether adding this additional caching layer would introduce a new Single Point Of Failure (SPOF). This was a concern because the solid-state cards/disks used for caching would normally have no redundancy, to maximize the available space, and I couldn’t find out what happens if any of the devices fail by looking in the documentation or My Oracle Support, so my decision was to test it!
The idea behind the OSFC is to provide a second level of “buffer cache” on solid-state devices with better response times than re-reading data blocks from spinning disks. When the buffer cache runs out of space, clean (not “dirty”) blocks are evicted from it and written to the OSFC. Dirty blocks are written by DBWR to the data files first, and only then copied to the OSFC and evicted from the buffer cache. You can read more about what it is, how it works, and how to configure it in the Oracle Database Administrator’s Guide for 11.2 and 12.1 and in the Oracle white paper “Oracle Database Smart Flash Cache”.

In my case the OSFC was considered for a database running on an Amazon AWS EC2 instance. We used EBS volumes as ASM disks for data files, and as EBS volumes are basically network-attached behind the scenes, we wanted to remove that little bit of I/O latency by using the instance store (ephemeral SSDs) for the Smart Flash Cache. The additional benefit of this would be a reduction in the IOPS done against the EBS volumes, and that’s a big deal, as it’s not that difficult to reach the IOPS thresholds on EBS volumes.

 

Configuration

I did the testing on my VirtualBox VM, which ran Oracle Linux 7.2 and Oracle Database 12.1.0.2 EE. I simply added another VirtualBox disk that I used for the OSFC (reminder: I was not looking to test performance here). The device was presented to the database via a separate ASM disk group named “FLASH”. Enabling the OSFC was done by setting the following parameters in the parameter file (a sketch of the commands follows the list):

  • db_flash_cache_file=’+FLASH/flash.dat’
  • db_flash_cache_size=’8G’
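For reference, a minimal sketch of setting these (the spfile approach and file name match the setup above; adjust the scope and sizing to your own environment):

alter system set db_flash_cache_file = '+FLASH/flash.dat' scope=spfile;
alter system set db_flash_cache_size = 8G scope=spfile;
-- then restart the instance so the flash cache file is created and used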

The first surprise came when I bounced the database to enable the new settings: the DB didn’t start, and the error “ORA-00439: feature not enabled: Server Flash Cache” was raised. Luckily, I found a known issue in MOS note “Database Startup Failing With ORA-00439 After Enabling Flash Cache (Doc ID 1550735.1)”, and after forcefully installing two RPMs from OL5 (enterprise-release and redhat-release-5Server), the database came up.

 

Testing

The test I chose was really simple. These are the preparation steps I did:

  • Reduced the buffer cache of the DB to approximately 700Mb.
  • Created table T1 of size ~1598Mb.
  • Set the parameter _serial_direct_read=NEVER (to avoid direct path reads when scanning large tables; I really wanted to cache everything this time). A sketch of the setup follows this list.
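A minimal sketch of what that setup might look like (the exact table definition, row count, and trace level are assumptions, not the original test script):

-- build a wide table of roughly the right size (definition is an assumption)
create table t1 as
select level as id, rpad('x', 1000, 'x') as pad
  from dual
 connect by level <= 1500000;

-- keep serial scans going through the buffer cache; underscore parameters
-- should only be touched after checking with Oracle Support
alter session set "_serial_direct_read" = never;

-- trace the wait events of the full scans that follow
alter session set events '10046 trace name context forever, level 8';

select count(*) from t1;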

The next step was full-scanning the table by running “select count(*) from T1”, while also tracing the operation to see what was happening:

    • During the 1st execution I observed the following wait events (all multi-block reads from data files, as expected). However, I knew the buffer cache was too small to fit all the blocks, so a large volume of them would end up in the OSFC when they were flushed out of the buffer cache:
      WAIT #140182517664832: nam='db file scattered read' ela= 6057 file#=10 block#=90244 blocks=128 obj#=92736 tim=19152107066
      WAIT #140182517664832: nam='db file scattered read' ela= 4674 file#=10 block#=90372 blocks=128 obj#=92736 tim=19152113919
      WAIT #140182517664832: nam='db file scattered read' ela= 5486 file#=10 block#=90500 blocks=128 obj#=92736 tim=19152121510
      WAIT #140182517664832: nam='db file scattered read' ela= 4888 file#=10 block#=90628 blocks=128 obj#=92736 tim=19152129096
      WAIT #140182517664832: nam='db file scattered read' ela= 3754 file#=10 block#=90756 blocks=128 obj#=92736 tim=19152133997
      WAIT #140182517664832: nam='db file scattered read' ela= 8515 file#=10 block#=90884 blocks=124 obj#=92736 tim=19152143891
      WAIT #140182517664832: nam='db file scattered read' ela= 7177 file#=10 block#=91012 blocks=128 obj#=92736 tim=19152152344
      WAIT #140182517664832: nam='db file scattered read' ela= 6173 file#=10 block#=91140 blocks=128 obj#=92736 tim=19152161837
      
    • The 2nd execution of the query confirmed the reads from the OSFC:
      WAIT #140182517664832: nam='db flash cache single block physical read' ela= 989 p1=0 p2=0 p3=0 obj#=92736 tim=19288463835
      WAIT #140182517664832: nam='db file scattered read' ela= 931 file#=10 block#=176987 blocks=3 obj#=92736 tim=19288465203
      WAIT #140182517664832: nam='db flash cache single block physical read' ela= 589 p1=0 p2=0 p3=0 obj#=92736 tim=19288466044
      WAIT #140182517664832: nam='db file scattered read' ela= 2895 file#=10 block#=176991 blocks=3 obj#=92736 tim=19288469577
      WAIT #140182517664832: nam='db flash cache single block physical read' ela= 1582 p1=0 p2=0 p3=0 obj#=92736 tim=19288471506
      WAIT #140182517664832: nam='db file scattered read' ela= 1877 file#=10 block#=176995 blocks=3 obj#=92736 tim=19288473665
      WAIT #140182517664832: nam='db flash cache single block physical read' ela= 687 p1=0 p2=0 p3=0 obj#=92736 tim=19288474615
      

 

Crashing it?

Once the OSFC was in use, I decided to “pull out the SSD” by removing the device /dev/asm-disk03-flash, which I had created using udev rules and which the FLASH disk group consisted of.
Once I did that, nothing happened immediately, so I executed the query against the T1 table again, as it would access the data in the OSFC. This is what I saw:

    1. The query didn’t fail; it completed normally. The OSFC was not used, and the query transparently fell back to normal disk I/O.
    2. I/O errors for the removed disk were logged in the alert log, followed by messages about the disabling of the Flash Cache. It didn’t crash the instance!
      Tue Dec 15 17:07:49 2015
      Errors in file /u01/app/oracle/diag/rdbms/lab12c/LAB12c/trace/LAB12c_ora_24987.trc:
      ORA-15025: could not open disk "/dev/asm-disk03-flash"
      ORA-27041: unable to open file
      Linux-x86_64 Error: 2: No such file or directory
      Additional information: 3
      Tue Dec 15 17:07:49 2015
      WARNING: Read Failed. group:2 disk:0 AU:8243 offset:1040384 size:8192
      path:Unknown disk
               incarnation:0x0 synchronous result:'I/O error'
               subsys:Unknown library krq:0x7f7ec93eaac8 bufp:0x8a366000 osderr1:0x0 osderr2:0x0
               IO elapsed time: 0 usec Time waited on I/O: 0 usec
      WARNING: failed to read mirror side 1 of virtual extent 8191 logical extent 0 of file 256 in group [2.3848896167] from disk FLASH_0000  allocation unit 8243 reason error; if possible, will try another mirror side
      Tue Dec 15 17:07:49 2015
      Errors in file /u01/app/oracle/diag/rdbms/lab12c/LAB12c/trace/LAB12c_ora_24987.trc:
      ORA-15025: could not open disk "/dev/asm-disk03-flash"
      ORA-27041: unable to open file
      Linux-x86_64 Error: 2: No such file or directory
      Additional information: 3
      ORA-15081: failed to submit an I/O operation to a disk
      WARNING: Read Failed. group:2 disk:0 AU:8243 offset:1040384 size:8192
      path:Unknown disk
               incarnation:0x0 synchronous result:'I/O error'
               subsys:Unknown library krq:0x7f7ec93eaac8 bufp:0x8a366000 osderr1:0x0 osderr2:0x0
               IO elapsed time: 0 usec Time waited on I/O: 0 usec
      WARNING: failed to read mirror side 1 of virtual extent 8191 logical extent 0 of file 256 in group [2.3848896167] from disk FLASH_0000  allocation unit 8243 reason error; if possible, will try another mirror side
      Tue Dec 15 17:07:49 2015
      Errors in file /u01/app/oracle/diag/rdbms/lab12c/LAB12c/trace/LAB12c_ora_24987.trc:
      ORA-15025: could not open disk "/dev/asm-disk03-flash"
      ORA-27041: unable to open file
      Linux-x86_64 Error: 2: No such file or directory
      Additional information: 3
      ORA-15081: failed to submit an I/O operation to a disk
      ORA-15081: failed to submit an I/O operation to a disk
      WARNING: Read Failed. group:2 disk:0 AU:8243 offset:1040384 size:8192
      path:Unknown disk
               incarnation:0x0 synchronous result:'I/O error'
               subsys:Unknown library krq:0x7f7ec93eaac8 bufp:0x8a366000 osderr1:0x0 osderr2:0x0
               IO elapsed time: 0 usec Time waited on I/O: 0 usec
      WARNING: failed to read mirror side 1 of virtual extent 8191 logical extent 0 of file 256 in group [2.3848896167] from disk FLASH_0000  allocation unit 8243 reason error; if possible, will try another mirror side
      Tue Dec 15 17:07:49 2015
      Errors in file /u01/app/oracle/diag/rdbms/lab12c/LAB12c/trace/LAB12c_ora_24987.trc:
      ORA-15081: failed to submit an I/O operation to a disk
      ORA-15081: failed to submit an I/O operation to a disk
      ORA-15081: failed to submit an I/O operation to a disk
      Encounter unknown issue while accessing Flash Cache. Potentially a hardware issue
      Flash Cache: disabling started for file
      0
      
      Flash cache: future write-issues disabled
      Start disabling flash cache writes..
      Tue Dec 15 17:07:49 2015
      Flash cache: DBW0 stopping flash writes...
      Flash cache: DBW0 garbage-collecting for issued writes..
      Flash cache: DBW0 invalidating existing flash buffers..
      Flash cache: DBW0 done with write disabling. Checking other DBWs..
      Flash Cache file +FLASH/flash.dat (3, 0) closed by dbwr 0
      

     

Re-enabling the OSFC

Once the OSFC was automatically disabled, I wanted to know if it could be re-enabled without bouncing the database. I added back the missing ASM disk, but that alone didn’t trigger re-enabling of the OSFC.
I had to set the db_flash_cache_size=’8G’ parameter again, and then the cache was re-enabled, which was also confirmed by a message in the alert log:

    Tue Dec 15 17:09:46 2015
    Dynamically re-enabling db_flash_cache_file 0
    Tue Dec 15 17:09:46 2015
    ALTER SYSTEM SET db_flash_cache_size=8G SCOPE=MEMORY;
    

Conclusions

Good news! It appears to be safe (and also logical) to configure Oracle Smart Flash Cache on non-redundant solid-state devices, as their failures don’t affect the availability of the database. However, you may experience a performance impact at the time the OSFC is disabled. I did the testing on 12.1.0.2 only, so this may behave differently in older versions.

 

Discover more about our expertise in the world of Oracle.

Log Buffer #455: A Carnival of the Vanities for DBAs


What better to do during the holiday season than to read the Log Buffer? This log buffer edition is here to add some sparkle to Oracle, MySQL and SQL Server on your days off.

Oracle:

  • Ops Center version 12.3.1 has just been released. There are a number of enhancements here.
  • Oracle R Enterprise (ORE) 1.5 is now available for download on all supported platforms with Oracle R Distribution 3.2.0 / R-3.2.0. ORE 1.5 introduces parallel distributed implementations of Random Forest, Singular Value Decomposition (SVD), and Principal Component Analysis (PCA) that operate on ore.frame objects.
  • Create a SOA Application in JDeveloper 12c Using Maven SOA Plug-In by Daniel Rodriguez.
  • How reliable are the memory advisors?
  • Oracle Enterprise Manager offers a complete cloud solution including self-service provisioning balanced against centralized, policy-based resource management, integrated chargeback and capacity planning and complete visibility of the physical and virtual environments from applications to disk.

SQL Server:

  • SQL Server Data Tools (SSDT) and Database References.
  • Stairway to SQL Server Extended Events Level 1: From SQL Trace to Extended Events.
  • Advanced Mathematical Formulas using the M Language.
  • Liberating the DBA from SQL Authentication with AD Groups.
  • Enterprise Edition customers enjoy the manageability and performance benefits offered by table partitioning, but this feature is not available in Standard Edition.

MySQL:

  • Is MySQL X faster than MySQL Y? – Ask query profiler.
  • Usually when one says “SSL” or “TLS” it means not a specific protocol but a family of protocols.
  • The MariaDB project is pleased to announce the immediate availability of MariaDB 10.1.10, MariaDB Galera Cluster 5.5.47, and MariaDB Galera Cluster 10.0.23.
  • EXPLAIN FORMAT=JSON: everything about attached_subqueries, optimized_away_subqueries, materialized_from_subquery.
  • Use MySQL to store data from Amazon’s API via Perl scripts.

 

Learn more about Pythian’s expertise in Oracle SQL Server & MySQL.

Oracle RAC Scalability: Slides from Oracle OpenWorld 2015


Finally getting some time to post the slides from OpenWorld 2015. This is a presentation about Oracle RAC scalability. The presentation covers the types of challenges involved in scaling an application horizontally via Oracle RAC. Generally speaking, it’s quite easy and mostly “just works”; there are only a few very specific, but quite common, things to address, mostly around write-write contention.

Here are the slides:

Log Buffer #456: A Carnival of the Vanities for DBAs


This Log Buffer Edition covers many aspects discussed this week in the realms of Oracle, SQL Server and MySQL.

Oracle:

  • Oracle and Informatica have a very close working relationship and one of the recent results of this collaboration is the joint project done by Informatica and our Oracle ISV Engineering team to test the performance of Informatica software with Oracle Database 12c In-memory on Oracle SPARC systems.
  • The only thing you can do easily is be wrong, and that’s hardly worth the effort.
  • Enterprise Manager 13c: What’s New in Database Lifecycle Management.
  • SnoopEE is a very interesting grass roots open source Java EE ecosystem project. Akin to NetFlixOSS Eureka it enables microservices discovery, lookup and registration.
  • Docker is an open source container technology that became immensely popular in 2014. Docker itself is written in Google’s programming language “Go” and supported on all major Linux distributions (RedHat, CentOS, Oracle Linux, Ubuntu etc.).

SQL Server:

  • This blog helps you understand Graphical Execution Plans in SQL Server.
  • New DAX functions in SQL Server 2016.
  • JSON support in SQL Server 2016.
  • PowerShell Tool Time: Building Help.
  • Datetime vs. Datetime2.

MySQL:

  • Peter Gulutzan discusses SQL qualified names.
  • It is not new that we can store a JSON content in a normal table text field. This has always been the case in the past. But two key features were missing: filtering based on JSON content attributes and indexing of the JSON content.
  • MySQL 5.7 Multi-Source Replication – Automatically Combining Data From Multiple Databases Into One.
  • OOM killer vs. MySQL 5.7.10 epic results with over-allocating memory.
  • Apache Spark with Air ontime performance data.

 

Learn more about Pythian’s expertise in Oracle SQL Server & MySQL.
