Using cURL to Fetch RETS Data
First, for those unfamiliar with cURL, here's the definition from its homepage: curl is a command line tool for transferring files with URL syntax. It is such a versatile tool that the underlying library, libcurl, is used by many network-enabled tools, including libRETS, which I'll discuss another time. For the purposes of this discussion, I'll assume you are familiar with running command line tools on your platform of choice, and that you are at least somewhat familiar with the RETS protocols. My examples are from a FreeBSD machine, but they should work pretty much as-is on your favorite platform. That said, let's get to it.
This example demonstrates fetching the raw RETS data. The data is not interpreted or formatted the way a tool like libRETS would do; this is just a quick and dirty means to fetch the raw feed. Parsing the data is an exercise left to the reader.
We will use the demo RETS server at http://demo.crt.realtors.org:6103/rets. Since we have no idea what the metadata looks like at that site, let's first grab it. We know that RETS uses digest authentication and that the server may require a user agent to be identified. We will use "MyCurlClient/1.0" as the user agent and "Joe:Schmoe" as the credentials.
The first step is to log in to the server. This authenticates us to the RETS server and returns some state information in the form of headers and cookies. We must preserve these for subsequent calls, or the process will fail. Here is the login:
curl --digest \
  --user-agent "MyCurlClient/1.0" \
  -o /tmp/login.xml \
  --show-error \
  --dump-header /tmp/headers.txt \
  -u "Joe:Schmoe" \
  --header "RETS-Version: RETS/1.7.2" \
  --cookie-jar /tmp/cookies.txt \
  "http://demo.crt.realtors.org:6103/rets/login"
This will leave three files in the /tmp directory: headers.txt, which contains the HTTP headers; cookies.txt, which contains the cookies; and login.xml, which contains the raw RETS output for the login transaction. The contents of login.xml need to be processed to determine whether or not the login succeeded and to fetch the URLs to be used for the search transactions. Again, I'll leave that as an exercise for the reader.
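If you want a head start on that exercise, here is a minimal sketch of pulling the ReplyCode and a capability URL out of a login reply. The reply contents below are hypothetical (a typical RETS 1.x key=value body), and the exact keys your server returns may differ:

```shell
# Hypothetical login reply, in the key=value form a RETS 1.x server returns.
cat > /tmp/login.xml <<'EOF'
<RETS ReplyCode="0" ReplyText="Operation Successful">
<RETS-RESPONSE>
GetMetadata=/rets/getMetadata
Search=/rets/search
Logout=/rets/logout
</RETS-RESPONSE>
</RETS>
EOF

# A ReplyCode of 0 indicates success.
reply_code=$(sed -n 's/.*ReplyCode="\([0-9]*\)".*/\1/p' /tmp/login.xml)
echo "ReplyCode: $reply_code"

# Pull a capability URL out of the key=value body.
search_url=$(sed -n 's/^Search=//p' /tmp/login.xml)
echo "Search URL: $search_url"
```

A real client would check every transaction's ReplyCode this way before proceeding.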
With a successful login, we should now be able to fetch the metadata. That can be done with the following:
curl \
  --digest \
  --user-agent "MyCurlClient/1.0" \
  -o /tmp/metadata.xml \
  --show-error \
  --dump-header /tmp/headers.txt \
  -u "Joe:Schmoe" \
  --header "RETS-Version: RETS/1.7.2" \
  --cookie-jar /tmp/cookies.txt \
  --cookie /tmp/cookies.txt \
  --data Type=METADATA-SYSTEM \
  --data ID=* \
  --data Format=COMPACT \
  "http://demo.crt.realtors.org:6103/rets/getMetadata"
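Since we asked for COMPACT format, the metadata comes back as tab-delimited rows. As a sketch of how you might list the classes for a resource, here is a hypothetical METADATA-CLASS fragment (the fields between the tags below are tab-separated, and the column names are illustrative):

```shell
# Hypothetical COMPACT metadata fragment; fields are tab-delimited.
printf '%s\n' \
  '<METADATA-CLASS Resource="Property" Version="1.0.0" Date="2024-01-01">' \
  '<COLUMNS>	ClassName	StandardName	VisibleName	</COLUMNS>' \
  '<DATA>	RES	ResidentialProperty	Residential	</DATA>' \
  '<DATA>	LND	Land	Land	</DATA>' \
  '</METADATA-CLASS>' > /tmp/metadata.xml

# Each <DATA> row starts and ends with a tab, so the first real field is $2.
awk -F'\t' '/^<DATA>/ {print $2}' /tmp/metadata.xml
```

Run against the sample above, this prints RES and LND, the class names under the Property resource.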
After examining the metadata, we see that the Property resource has a class called RES and a table (field) called ListPrice. Let's use that to search for all properties with a ListPrice of $300,000 or more (in DMQL2, a trailing hyphen means "greater than or equal"):
curl \
  --digest \
  --user-agent "MyCurlClient/1.0" \
  -o "/tmp/search.xml" \
  --show-error \
  --dump-header /tmp/headers.txt \
  -u "Joe:Schmoe" \
  --header "RETS-Version: RETS/1.7.2" \
  --cookie-jar /tmp/cookies.txt \
  --cookie /tmp/cookies.txt \
  --data Format=COMPACT \
  --data SearchType=Property \
  --data Class=RES \
  --data StandardNames=0 \
  --data QueryType=DMQL2 \
  --data Query="(ListPrice=300000-)" \
  "http://demo.crt.realtors.org:6103/rets/search"
You should now have the file /tmp/search.xml with the results of this search.
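The search results use the same tab-delimited COMPACT layout as the metadata, so the same kind of one-liner extracts the rows. Here is a sketch against a hypothetical reply (the listing IDs and prices are made up for illustration):

```shell
# Hypothetical COMPACT search reply; fields are tab-delimited.
printf '%s\n' \
  '<RETS ReplyCode="0" ReplyText="Operation Successful">' \
  '<COUNT Records="2" />' \
  '<DELIMITER value="09"/>' \
  '<COLUMNS>	ListingID	ListPrice	</COLUMNS>' \
  '<DATA>	LN000001	450000	</DATA>' \
  '<DATA>	LN000002	315000	</DATA>' \
  '</RETS>' > /tmp/search.xml

# Print ListingID and ListPrice for each result row.
awk -F'\t' '/^<DATA>/ {print $2, $3}' /tmp/search.xml
# Prints:
#   LN000001 450000
#   LN000002 315000
```

For anything beyond quick inspection, a real XML parser is a better idea, but this is often enough to eyeball a server's output.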
As you can see, cURL is a powerful tool. I use it constantly when trying to prove whether or not a particular RETS server is returning what I expect. Since it operates at such a low level, I can eliminate all client artifacts when debugging a problem.