Wednesday, November 24, 2010

Sometimes you can have your cake and eat it too

Finally, a blog post after three months. I finally managed to overcome my laziness and found something interesting to blog about. The past month has been exciting, with me joining a new company called fcs-inc.com, and I am looking forward to doing some exciting things in my new workplace.

Now, coming to the post: if you are wondering what the logic behind this title is, read on...

More often than not, when we are stuck with a badly performing SQL statement, one of the first things we do is look at the plan generated by the optimizer.
Many of us then wonder why the optimizer is not using an index and is instead going for a FULL table scan, and we keep scratching our heads. There are many possible reasons; we will discuss one of them here and how to overcome it.

Show time starts ......

Let's create a table called t.


SQL> drop table t;

Table dropped.

Elapsed: 00:00:02.20
SQL> create table t (x varchar2(10), y int);

Table created.

Elapsed: 00:00:00.17

OK, so our simple two-column table is created.

Now, let's create a simple index on column x of table t.

SQL> create index idx_x on t(x);

Index created.

Let's insert some data now that our DDLs are done.


SQL> insert into t
  2  (select rownum, rownum+10 from dual connect by level < 1000000);

999999 rows created.

Elapsed: 00:00:19.29
SQL> commit;

Commit complete.

Elapsed: 00:00:00.03

Let's give the optimizer as much information as possible by gathering statistics.


SQL> exec dbms_stats.gather_table_stats('sh','T');

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.03
SQL> exec dbms_stats.gather_index_stats('sh','IDX_X');

PL/SQL procedure successfully completed.

Here starts the real fun.

All the values of x in my table are distinct.

Let's run a small query and look at the results.

Before that, we will turn AUTOTRACE on.

SQL> SET AUTOTRACE TRACEONLY


SQL> SELECT * FROM T WHERE X = 9999;

Elapsed: 00:00:00.34

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |   517  (11)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    11 |   517  (11)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER("X")=9999)  ---- (Oracle is internally using TO_NUMBER here)

Oops... this is a FULL TABLE SCAN. To pick up one row out of 999,999, Oracle is scanning the entire table even though there is an index it could have used to fetch the row. This is clearly not what we want.
But why did Oracle choose the full table scan?

A careful look at the query and the predicate information will reveal this.

SELECT * FROM T WHERE X = 9999

The data type of x is VARCHAR2, and we are comparing a VARCHAR2 column with a number.

Oracle resolves the mismatch by implicitly applying TO_NUMBER to the column (exactly what the predicate information shows), and because the column is now wrapped in a function, the plain index on x cannot be used.
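
As an aside that goes beyond the original test: because the implicit TO_NUMBER is applied to the column, the query is also fragile. If x ever held a non-numeric string (purely hypothetical here, since our data is all numeric), the same statement could fail outright:

SQL> insert into t values ('ABC', 0);
SQL> select * from t where x = 9999;
ERROR:
ORA-01722: invalid number

Comparing against the string literal '9999' avoids both the full scan and this risk.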

Let's modify the query a bit and see the results.


SQL> select * from t where x = '9999';

Elapsed: 00:00:00.10

Execution Plan
----------------------------------------------------------
Plan hash value: 3460324958

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    11 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |    11 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_X |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"='9999')

Bingo... that is very sweet of Oracle to give us the expected plan.
The only difference was the quotes around the value '9999'.
Also look at the change in the predicate information between the first and second queries.
Since Oracle in this case compares a VARCHAR2 with a VARCHAR2, there is no need for an implicit conversion, and hence the index is used.

Now, let us tweak the query a bit.

Let's try to force Oracle to use the index with a hint.

SQL> select /*+ index(t idx_x) */ x, y from t where x = 9999;

Elapsed: 00:00:00.31

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |   517  (11)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    11 |   517  (11)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER("X")=9999)

Forcing the index with a hint is of no use: Oracle cannot use it because of the TO_NUMBER applied to the column in the predicate.

This is getting interesting. Let's see how the optimizer behaves when we use an explicit TO_NUMBER conversion.

SQL> SELECT * FROM T WHERE x = to_number('9999');

Elapsed: 00:00:00.32

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |   517  (11)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    11 |   517  (11)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER("X")=9999)

The results are the same, as expected.

So, how do we overcome this problem?

There are two solutions.

The first is to create a FUNCTION-BASED INDEX on the expression used in the predicate.

SQL> create index idx_fn_t_x on t(to_number(x));

Index created.

Elapsed: 00:00:02.50
SQL> select * from t where x = 999999;

Elapsed: 00:00:00.06

Execution Plan
----------------------------------------------------------
Plan hash value: 4214757644

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |  9984 |   107K|    13   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T          |  9984 |   107K|    13   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_FN_T_X |  4000 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(TO_NUMBER("X")=999999)

That works. Our new function-based index IDX_FN_T_X is being used.

The second solution is to apply the SQL function on the other side of the predicate, i.e. to the literal instead of the column.

SQL> SELECT * FROM T WHERE x = to_char('9999');

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3460324958

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    11 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |    11 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_X |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"='9999')

There you go... the second solution is far simpler to implement, and you have one less index to maintain.
Also, one look at the costs of both plans shows that the second solution is cheaper than the first.
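
A related point, and my own addition rather than part of the original test run: in application code the usual cure is to bind the value with the correct datatype, so no conversion is needed on the column side. A quick sketch with a SQL*Plus bind variable:

SQL> variable v varchar2(10)
SQL> exec :v := '9999'
SQL> select * from t where x = :v;

Since the bind is already a VARCHAR2, the predicate stays as "X"=:V and the plain index IDX_X can be used.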

Now who says you cannot have your cake and eat it too... Thanks to Wikipedia for explaining the idiom here: http://en.wikipedia.org/wiki/Have_one's_cake_and_eat_it_too

More on the types of indexes in upcoming posts... until then, happy reading.















Sunday, August 15, 2010

Save Points and Rollback

Recently I was asked a nice question by one of my teammates, which ultimately led to this post. It was a question from one of the certification exams, and it is a good one because as a developer one has to know the distinction between SAVEPOINT and ROLLBACK.
Now let's get back to some basics of transaction management.
What is a savepoint? Loosely speaking, it is a kind of milestone in a transaction where you tell Oracle: "Please remember my work as of this point in the transaction, as I might need to come back to it later, BEFORE I COMMIT." You can have multiple savepoints in a transaction.
Let's see an example.

We create a table called my_table and insert some rows into it.

sess1>drop table my_table;

Table dropped.

sess1>create table my_table (x int primary key, y int);

Table created.

sess1>insert into my_table values (1,11);

1 row created.

sess1>insert into my_table values (2,22);

1 row created.

Let's create a savepoint called a.

sess1>savepoint a;

Savepoint created.

sess1>select * from my_table;

         X          Y
---------- ----------
         1         11
         2         22

Let's insert one more row into the table and create another savepoint.

sess1>insert into my_table values (3,33);

1 row created.

sess1>savepoint b;

Savepoint created.

Now that we have created two logical milestones, let's see what effect the savepoints have when we query the table.

sess1>select * from my_table;

         X          Y
---------- ----------
         1         11
         2         22
         3         33

Let's roll back to the first savepoint a and see the results.

sess1>rollback to savepoint a;

Rollback complete.

sess1>select * from my_table;

         X          Y
---------- ----------
         1         11
         2         22

We cannot see the newly inserted row (3,33) as we have rolled back to the logical savepoint a.

Let's see the effect of a complete rollback without a savepoint.

sess1>rollback;

Rollback complete.

sess1>select * from my_table;

no rows selected

So a plain ROLLBACK rolls back the entire transaction, irrespective of any savepoints.
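
A small aside, not part of the original example: a full ROLLBACK also releases all savepoints of the transaction, so trying to roll back to one of them afterwards fails with ORA-01086:

sess1>rollback to savepoint a;
rollback to savepoint a
*
ERROR at line 1:
ORA-01086: savepoint 'A' never established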

Armed with these basics, let's try to solve the question that was asked.
The question was: which rows will be present in the table my_table after the procedure proc_my_table is executed?


sess1>drop table my_table;

Table dropped.

sess1>create table my_table (x int primary key, y varchar2(5));

Table created.

sess1>create or replace procedure proc_my_table as
  2    lv_name VARCHAR2(5);
  3    lv_cnt  number;
  4  BEGIN
  5    INSERT INTO my_table VALUES ( 1, 'Illi' );
  6    SAVEPOINT A;
  7    INSERT INTO my_table VALUES ( 2, 'vijay' );
  8    SAVEPOINT B;
  9    INSERT INTO my_table VALUES ( 3, 'fiat' );
 10    SAVEPOINT C;
 11    BEGIN
 12      ROLLBACK;
 13      INSERT INTO my_table VALUES ( 3, 'fiat' );
 14    EXCEPTION
 15      WHEN dup_val_on_index THEN
 16        ROLLBACK TO B;
 17        BEGIN
 18          INSERT INTO my_table VALUES ( 2, 'palio' );
 19          INSERT INTO my_table VALUES ( 4, 'punto' );
 20          INSERT INTO my_table VALUES ( 5, 'AVeryVerynicecar' );
 21          SAVEPOINT D;
 22        EXCEPTION
 23          WHEN dup_val_on_index THEN
 24            ROLLBACK TO A;
 25          WHEN others THEN
 26            ROLLBACK TO D;
 27        END;
 28    END;
 29    INSERT INTO my_table VALUES ( 5, 'linea' );
 30    COMMIT;
 31  END;
 32  /

Procedure created.

Let's execute the procedure.
sess1>exec proc_my_table;

PL/SQL procedure successfully completed.

Now let's check the data in the table my_table.

sess1>select * from my_table ;

         X Y
---------- -----
         3 fiat
         5 linea

Interesting: only two rows ended up in the table.

Let's try to understand what exactly transpired when the procedure was executed.
Up to line 11, everything goes normally and three rows are inserted.

It is line 12, the plain ROLLBACK, that rolls back the three previously inserted rows.
So at this point the total number of rows in the table is ZERO.

Line 13 then inserts one row into the table.
So now the total number of rows in the table is ONE.

Since that insert succeeds, the exception handlers on lines 14 - 28 are not executed.

Next, line 29 is executed and the work is committed on line 30.

Hence there are only two rows in the table my_table.


What would happen if we slightly modified the procedure and, instead of the full ROLLBACK on line 12, rolled back to a savepoint?

sess1>create or replace procedure proc_my_table as
  2    lv_name VARCHAR2(5);
  3    lv_cnt  number;
  4  BEGIN
  5    INSERT INTO my_table VALUES ( 1, 'Illi' );
  6    SAVEPOINT A;
  7    INSERT INTO my_table VALUES ( 2, 'vijay' );
  8    SAVEPOINT B;
  9    INSERT INTO my_table VALUES ( 3, 'fiat' );
 10    SAVEPOINT C;
 11    BEGIN
 12      ROLLBACK to c; -- this line is changed
 13      INSERT INTO my_table VALUES ( 3, 'fiat' );
 14    EXCEPTION
 15      WHEN dup_val_on_index THEN
 16        ROLLBACK TO B;
 17        BEGIN
 18          INSERT INTO my_table VALUES ( 2, 'palio' );
 19          INSERT INTO my_table VALUES ( 4, 'punto' );
 20          INSERT INTO my_table VALUES ( 5, 'AVeryVerynicecar' );
 21          SAVEPOINT D;
 22        EXCEPTION
 23          WHEN dup_val_on_index THEN
 24            ROLLBACK TO A;
 25          WHEN others THEN
 26            ROLLBACK TO D;
 27        END;
 28    END;
 29    INSERT INTO my_table VALUES ( 5, 'linea' );
 30    COMMIT;
 31  END;
 32  /


Procedure created.

sess1>exec proc_my_table;

PL/SQL procedure successfully completed.

sess1>select * from my_table ;

         X Y
---------- -----
         1 Illi
         5 linea

So what happened in the changed procedure?

Line 12 now rolls back only to savepoint C, so the duplicate insert on line 13 raises dup_val_on_index and the handler on lines 15-16 rolls the transaction back to savepoint B.

Then the inner block runs: the duplicate insert on line 18 raises dup_val_on_index again, and the handler on lines 23-24 rolls the transaction back to savepoint A.
So at this point there is only one row, (1,'Illi'), in the table my_table.
Next, line 29 is executed and the row (5,'linea') is inserted and committed.
I hope this post helps in some way with understanding savepoints and rollback.

Illiyaz






Wednesday, August 4, 2010

Accessing Secure Database Environments using Port Tunneling

In my day-to-day work, I have found that many database developers are frustrated because they cannot connect to a database server running on a Linux/Unix environment from their Windows desktop. New security rules imposed by network managers keep the server ports closed to everything except a specific environment (here, a Linux development environment running an SSH server).

These restrictions cause significant productivity bottlenecks, because developers cannot use their GUI tools while developing database code.

While database developers are technically literate, network setup and firewalls are not normally part of their knowledge base.

This post describes how to set up an SSH tunnel to allow a connection from a Windows desktop to the database server environment (Linux/Unix).

Here, I have installed an SSH client on Windows using a Cygwin+OpenSSH installation; the PuTTY client can also be used as the SSH client.

We need three things for the approach described below (most organisations already have this setup to access their databases within the LAN):

1. Windows Desktop (Development Desktop )

2. Linux/Unix Development Environment ( SSH server)

3. Oracle Database Server ( ORACLE Database Hosting Server)

The developer is able to connect to the Oracle database from the Linux development environment (the database host firewall permits access from the Linux development box).

But when the developer tries to connect to the database using SQL*Plus, SQL Developer or any other client from the Windows desktop, the request fails with a connection timeout error.

Here is the sample TNS entry:
xdb=(DESCRIPTION= (ENABLE=BROKEN) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=databaseserver.test.com)(PORT=64187))) (CONNECT_DATA= (SID=xdb) (SERVER=DEDICATED)))

C:\Users\vireddy>sqlplus username@xdb

SQL*Plus: Release 11.2.0.1.0 Production on Wed Aug 4 04:44:09 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12170: TNS:Connect timeout occurred

The reason for the failure is that the database firewall does not permit the Windows desktop to establish a direct connection to the database server port.

We can still connect to the database server by putting a tunnel in place, using PuTTY or a Cygwin+OpenSSH installation on the Windows environment.

Open the SSH client on the Windows environment and execute the following command:

ssh -L 64187:databaseserver.test.com:64187 vireddy@linuxdevelopment.test.com -N

In the above command, we are forwarding requests on local port 64187 to the database server's port 64187, via the SSH server on the linuxdevelopment box.

Now we are able to connect to the database from the Windows environment:

xdb=(DESCRIPTION= (ENABLE=BROKEN) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=64187))) (CONNECT_DATA= (SID=xdb) (SERVER=DEDICATED)))

C:\Users\vireddy>sqlplus username@xdb

SQL*Plus: Release 11.2.0.1.0 Production on Wed Aug 4 04:45:58 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


We can also add multiple port forwards in a single command line, as shown below:


ssh -L 64187:databaseserver.test.com:64187 -L 64150:Appserver.test.com:64150 vireddy@linuxdevelopment.test.com -N

The general command-line format for using the ssh client is

ssh -L localport:host:hostport user@ssh_server -N

where:
-L         - port-forwarding parameters
localport  - local port (choose a port that is not in use by another service)
host       - server that has the port (hostport) you want to forward
hostport   - remote port
-N         - do not execute a remote command (you will not get a shell)
user       - a user that has SSH access to the ssh server
ssh_server - the SSH server that will be used for forwarding/tunnelling

 +----------+ <--port 22--> +----------+ <--port 64187--> +----------+
 |SSH Client|---------------|ssh_server|------------------|  Server  |
 +----------+               +----------+                  +----------+
 Windows Box:64187           Linux Box             Database Server:64187
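
Rather than retyping the forwarding options each time, the same tunnel can also be kept in the OpenSSH client configuration file. A sketch assuming the Cygwin/OpenSSH client (the host alias dbtunnel is my own, not from the original setup):

# ~/.ssh/config
Host dbtunnel
    HostName     linuxdevelopment.test.com
    User         vireddy
    LocalForward 64187 databaseserver.test.com:64187

The tunnel can then be started with just: ssh -N dbtunnel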

We can do the same setup using the PuTTY client, as described below.

Open the PuTTY tool and enter the details below to connect to the SSH server (it is running on the Linux development environment).

(screenshot of the PuTTY session configuration appeared here)
Enter the port forwarding details:

(screenshot of the PuTTY SSH tunnel configuration appeared here)

Click the Add button for the port forwarding to take effect. (If we open another PuTTY client instance we will lose this setting; we can persist it by going to the Session tab and clicking the Save button.)

DDLs - Autocommit and Autonomous Transactions

So I finally convinced myself to overcome my laziness and write the second post... and here you go.

Let's get started with a discussion of DDLs and the effect they can have on uncommitted work in Oracle.

Let's create a table called sess1 in session1.
session1>create table sess1 (x int, y int);
Table created.

Now let's insert some data into the table.
session1>insert into sess1 values (1,11);
1 row created.
session1>insert into sess1 values (2,22);
1 row created.
session1>insert into sess1 values (3,33);
1 row created.


session1>select * from sess1;
         X          Y
---------- ----------
         1         11
         2         22
         3         33

Let's open one more session and check for the data.

session2>select * from sess1;

No rows selected

So far so good... the expected result. Excellent.
Let's commit in session1 and see the results.

session1>commit;

Now let's check the data from session2.
session2>select * from sess1;
         X          Y
---------- ----------
         1         11
         2         22
         3         33
Still the expected behaviour from Oracle.
Let's insert some more data in session1.
session1>insert into sess1 values (4,44);

1 row created

Now let's issue a DDL by trying to create a table in session1.

session1> create table sess11 (x int, y intt);
 create table sess11 (x int, y intt)
                               *
ERROR at line 1:
ORA-00902: invalid datatype



OK, the CREATE TABLE failed because the data type (intt) was invalid.


Let's go ahead and check the data in session2.



session2>select * from sess1;


         X          Y
---------- ----------
         1         11
         2         22
         3         33
         4         44  --> newly inserted row from session1


Oh! How are we seeing the new row (4,44) in session2 even though we have not committed in session1?
This is because we attempted a DDL in session1, and Oracle implicitly committed on our behalf; that is why the newly inserted data is visible in session2, even though the DDL itself failed.



   
So one has to be careful while issuing DDLs, so that they do not commit all the uncommitted work done in the session.

Let's see what happens when a successful DDL is run in session1.

session1>insert into sess1 values (5,55);

1 row created.


session1>create global temporary table gtt (x int, y int);


Table created.



session2>select * from gtt;


no rows selected


session2>select * from sess1;


         X          Y
---------- ----------
         1         11
         2         22
         3         33
         4         44
         5         55  --> newly inserted row


So both the new table we created and the new row (5,55) we inserted in session1 are visible from session2; no explicit commit was needed in session1.

So why does Oracle commit on DDL?

Conceptually, Oracle executes a DDL statement (though the exact internal workings are not documented) as:

commit;            -- implicit, part of the DDL statement
DDL statement;
commit;            -- implicit, part of the DDL statement

Since there is a commit before the DDL is executed, even a failed DDL leaves the previously inserted data visible to the other session (the first case); and since there is a commit after the DDL, the newly created objects are visible in the second session (the second case).


But why is this important to us developers?

Because not being able to commit or roll back at will is a bad transaction habit to be forced into.

Many a time we keep working in a single session, inserting and deleting a lot of data from tables for testing purposes, and one DDL attempt will autocommit all of that work, with no way to roll it back afterwards.


So how do we prevent DDLs from affecting our uncommitted DML?

Enter autonomous transactions.

Let's try to understand the difference in behaviour between a normal transaction and an autonomous transaction.


session1>insert into sess1 values (8,88) ;

1 row created.




session1>declare
  2  pragma autonomous_transaction;
  3  begin
  4  execute immediate 'create table sess11 as select * from sess1' ;
  5  end;
  6  /
PL/SQL procedure successfully completed.



session1>desc sess11;

 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 X                                                              NUMBER(38)
 Y                                                              NUMBER(38)

So the new table sess11 has been successfully created via the anonymous block.

Now let's check whether the newly inserted row (8,88) is visible in session2.


session2>select  * from sess1 ;



         X          Y
---------- ----------
         1         11
         2         22
         3         33
         4         44
         5         55
         6         66
         7         77


Wow, the row (8,88) is not seen in session2, even though the DDL in session1 succeeded. The difference lies in the declaration section of the anonymous block:

pragma autonomous_transaction;

The DDL (and its implicit commits) ran in a separate, autonomous transaction, so the uncommitted insert in the main transaction was left untouched.
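
If you routinely need to run DDL from the middle of a transaction without losing uncommitted work, the same idea can be wrapped into a small helper procedure. A minimal sketch (the procedure name run_ddl_autonomous is my own, not from the original post):

create or replace procedure run_ddl_autonomous (p_ddl in varchar2) as
  pragma autonomous_transaction;
begin
  -- the implicit commits issued by the DDL apply only to this
  -- autonomous transaction; the caller's uncommitted work is untouched
  execute immediate p_ddl;
end;
/

session1>exec run_ddl_autonomous('create table sess12 (x int, y int)');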



So how does an autonomous transaction work, and what are its implications for us developers?
I will write more about it in my next post (hopefully soon), and until then, happy reading.


Illiyaz Mohammad


























Saturday, July 10, 2010

What is TNSNAMES.ora

This is the first post on the blog, and after a lot of thought I decided to write something about the TNSNAMES.ora file. I am sure most of us see this file almost every day but never bother to understand what it means. OK, so let's get started...


TNS stands for Transparent Network Substrate. Whenever we want to connect to Oracle through a client, say SQL*Plus, we refer to the entries in this file by giving a host string that matches an entry in the TNSNAMES.ora file. So what does this file contain? It contains the details of the databases that can be connected to. Let's start with an example.
One of the entries on my machine looks like this:

1.  DB1111 =
2.    (DESCRIPTION =
3.      (ADDRESS = (PROTOCOL = TCP)(HOST = TESTSERVER)(PORT = 1521))
4.      (CONNECT_DATA =
5.        (SERVER = DEDICATED)
6.        (SERVICE_NAME = db1)
7.      )
8.    )

Let's go through the entry now. Please note that I have marked the line numbers for clarity of explanation.
Line 3: "PROTOCOL = TCP" refers to the networking protocol.
"HOST = TESTSERVER" refers to the server on which the database is installed.
"PORT = 1521" refers to the port on the server on which the listener is running.
A listener is a process on the database server which accepts new connections to the database.
Line 5: "SERVER = DEDICATED" is the mode of connection, DEDICATED or SHARED. There are two types of connections, dedicated server mode and shared server mode; more about this in the next post.
Line 6: "SERVICE_NAME = db1" is the name of the database service we intend to connect to. Older entries use SID here instead; SID stands for System Identifier.

(Extended information: the SERVICE_NAME concept became available with TNS v3 in Oracle 8i. Connections using the SID mechanism still use TNS v2 and are kept only for backward compatibility; using services is the recommended way to configure a TNS connect alias from 8i onwards. A service is a way to refer to an instance. Administering by means of services is helpful in RAC environments: you can declare certain instances to belong to one service and others to another, and in this way you can better distribute workloads during peak and off-peak hours, schedule tasks, and avoid bottlenecks by redirecting jobs to specific services and using features such as Resource Manager.)
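
For completeness, a legacy-style entry that uses SID instead of SERVICE_NAME would look something like this (a sketch for illustration, reusing the host and port from the example above):

DB1111_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = TESTSERVER)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = db1)
    )
  )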
Line 1: This is the host string (TNS alias) that we refer to while connecting.

So when we connect, say by typing scott/tiger@DB1111 in the SQL*Plus window, our request gets translated into a request to connect to the database service db1 running on TESTSERVER, whose listener is accepting connections on port 1521.

Can I connect to Oracle without a TNSNAMES.ora on my local machine? Yes, of course you can, as long as you know the host, the port and the service name.
Go to the command prompt and type sqlplus "username"/"pwd"@"ENTIRE TNS ENTRY".
On my local machine it looks like this:

C:\Documents and Settings\Smi>sqlplus scott/illiyaz@(description=(address=(protocol=TCP)(Host=localhost)(PORT=1521))(connect_data=(server=dedicated)(service_name=db1)))

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Jul 9 10:20:16 2010

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

SQL>
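
As an aside not covered in the original post, SQL*Plus also accepts the EZConnect shorthand (provided EZCONNECT is listed under NAMES.DIRECTORY_PATH in sqlnet.ora), which saves typing the whole descriptor:

sqlplus scott/illiyaz@//localhost:1521/db1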


With all this information, we are in a position to troubleshoot the ORA-12505 error (TNS: listener could not resolve SID given in connect descriptor).




More to come on the types of connections in the next post... until then, happy learning.

Illiyaz