Arduino is open-source, but it doesn't fall into the typical open-source model in that it includes a hardware component in addition to the software element. Several different hardware components, in fact. What I have gotten my hands on is one of the more popular and powerful models, the Mega 2560, but that isn't important right now...
I wanted to touch briefly on the possibilities and complexities of programming with Arduino. Suppose you'd like to use an Arduino board to power a prosthetic hand. How can that be done? I intend to explore this question in depth, but here I'll just outline a couple of concepts.
INPUTS AND OUTPUTS
The Arduino board accepts a variety of low-voltage inputs and has the ability to measure them incrementally. For example, a digital input can be "instructed" to detect an ON or OFF (HIGH or LOW) signal to trigger a switch, while an analog input can receive a series of incremental values between a minimum and a maximum to control a variable-speed motor.
If a sensor were properly placed on the amputee's wrist, or perhaps even an elbow or bicep, it could detect muscular movements and generate voltages. These voltages could then be processed by the Arduino board to output a signal to a variable-speed servo controlling a prosthetic finger. Pressure sensors on the fingertips could then send data back to the Arduino board to represent "grip pressure". That data could be processed further to output a signal to a small device at the amputation point that simulates the pressure directly for the wearer. The wearer would then be able to physically sense the grip pressure and make physical adjustments that would, in turn, be read and interpreted by the Arduino to stop the squeezing process while still maintaining the grip.
These are the challenges that make programming not only fun, but rewarding. The idea of creating something that would enable a child to receive a prosthetic arm his own dad built for him, with parts and components purchased online or at the local hardware store, is really compelling. It's more than a dream. It's something that is currently on my plate. Progress reports will be forthcoming. It will be a slow process, but I am excited to get started!
Until then, happy computing!
It has, once again, been a long time since I have posted anything. It's time. In fact, it's long overdue.
Bloggers have personal lives. My personal life has included a change of employment, a couple of new business ventures, and most recently, the loss of a parent. It's been a very bitter-sweet year for me.
Last things first - Mom. I recently bid farewell to the rock of my life, the one person who has always been there to support me, cheer me, lecture me, help me, and above all, love me through all the ups and downs in my very up and down life. Her example of perseverance is one that I will never forget and hope to emulate in every aspect of my life.
Which takes us to new business ventures. Mom was never afraid to learn something new, and when she decided to learn something, she never went halfway. With that example in the forefront of my mind, I am delving into a new realm of experience, namely X-Ray. RMF X-Ray Services in the San Antonio, TX area is building a growing and profitable business and RMF (initials of the owner) has invited me to become a part of his venture as a technician, web developer, database developer/administrator, and all-around assistant. It gives me a chance to grow and learn in a new arena and to associate with one of my favorite people in the world.
I am also expanding my business acumen into the Direct Sales arena with the new and exciting science of Nutrigenomics. My mom was a FIRM believer in this awesome science and she used it to extend her life and improve her quality of life while she was with us. She often said that she felt better in her 80's than she did in her 70's, largely due to this awesome science. She was honored in front of thousands in Orlando recently for her dedication to and love of LifeVantage. I will be sharing things in this blog about LifeVantage and the incredible opportunity it presents for better health and freedom.
Yes, I will continue to share code snippets, too. I strongly believe in the community that is built by sharing knowledge with others. It's primarily the way I learned, and I hope to continue to learn new and innovative ways to accomplish otherwise mundane or time-consuming tasks. It is one of the primary reasons for Ent Web Musings.
Until next time...
This post takes us to the next phase in the Integrating with Excel from Access series – the FormatReportGeneric() function.
This relatively simple function does the following:
- Selects the current used range in the active spreadsheet.
- Creates a table from the selected range and names it according to the optional TableName argument. Note that this pre-supposes that the data to format begins at cell A1. A few extra lines can be added to account for table data in a location other than A1, but I’m keeping it simple in this post as it works fine as-is for most applications.
- Formats the table to one of the built-in table styles. I use TableStyleMedium4. You can pick the style that best suits your needs, or add some code to make it an optional argument.
- Sorts the table based on the optional argument SortField
- Aligns some fields. There are some fields that appear over and over again in a given application that need to be centered or maybe right-justified. Obviously, you’ll want to include your own fields, perhaps even create an array of common fields and loop through them.
- Converts date fields. Along the same lines as alignment, date fields do not always translate well to Excel, especially when the data derives from a linked SQL Server database. Dates arrive in a text format and must be converted; NumberFormat = "mm/dd/yyyy" takes care of that.
- Converts numbers stored as text. Numbers are often stored as text (especially when working with SQL Server or Oracle data sources) and have to be converted. This is a two step process:
- Change the number format to "General"
- Rewrite the value into the cell. This second step actually completes the conversion to a number.
- Auto-fits columns to the appropriate minimum width to accommodate the entries.
- Reduces some columns – This is also totally optional. Sometimes there are columns (like Description) that have very long entries. You may define those here and reduce them to a manageable ColumnWidth of 50 (Excel column-width units, roughly one character each, not pixels).
- Unwraps the text in the entire table. Again, this is optional. It makes the table look uniform, and since we have already set the autofit, nothing gets cut off except maybe the longer fields like Description.
That’s it! We’ve got a nicely formatted Excel table that used to be just bland data as it was pasted in from Access. Obviously, there is a LOT more we could do (like setting up headers and footers, etc.,) but this is a quick and dirty overview on how to get started.
Here is the code:
Sub FormatReportGeneric(objXL As Excel.Application, Optional SortField As String = "", Optional TableName As String = "Table1")
    On Error GoTo errFormatReport
    With objXL
        ' Select the data range – we are pre-supposing that the active
        ' sheet only contains data that we want to convert to a table.
        ' If you want to start from a specified range instead of
        ' getting the entire sheet, one way to select just the table
        ' data beginning with the active cell is:
        '     .Range(.ActiveCell, .ActiveCell.SpecialCells(xlLastCell)).Select
        .ActiveSheet.UsedRange.Select
        If .Selection.Rows.Count = 1048574 Then
            ' Be sure to trap for this error somewhere – it means
            ' there was nothing on the sheet to format.
            Err.Raise vbObjectError + 1001
        End If
        ' Create the table and apply a built-in style.
        .ActiveSheet.ListObjects.Add(xlSrcRange, .Selection, , xlYes).Name = TableName
        .ActiveSheet.ListObjects(TableName).TableStyle = "TableStyleMedium4"
        ' Sort on the optional SortField.
        If SortField <> "" Then
            With .ActiveSheet.ListObjects(TableName).Sort
                .SortFields.Clear
                .SortFields.Add Key:=objXL.Range(TableName & "[[#All],[" & SortField & "]]"), _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                .Header = xlYes
                .Apply
            End With
        End If
        ' Format columns (centering, date/time).
        ' Some columns may exist, others not. It's generic…
        On Error Resume Next
        .Range(TableName & "[Qty]").HorizontalAlignment = xlCenter
        .Range(TableName & "[Qty]").NumberFormat = "General"
        .Range(TableName & "[Qty]").Value = .Range(TableName & "[Qty]").Value
        .Range(TableName & "[Quantity]").HorizontalAlignment = xlCenter
        .Range(TableName & "[Quantity]").NumberFormat = "General"
        .Range(TableName & "[Quantity]").Value = .Range(TableName & "[Quantity]").Value
        .Range(TableName & "[TimeStamp]").NumberFormat = "mm/dd/yyyy"
        .Range(TableName & "[ReceiptTime]").NumberFormat = "[$-F400]h:mm:ss AM/PM"
        ' Auto-fit everything, then cap the long columns.
        .ActiveSheet.UsedRange.Columns.AutoFit
        .Range(TableName & "[Description]").ColumnWidth = 50
        ' Unwrap text across the whole table.
        .ActiveSheet.ListObjects(TableName).Range.WrapText = False
    End With
    Exit Sub
errFormatReport:
    ' Include your favorite error trapping routine.
    ' Common errors include 1004 (an undetermined Excel error) and
    ' 91 (a problem with the Excel object being properly set). Don't forget
    ' to include error trapping for the vbObjectError + 1001 error (no data).
End Sub
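To show how this fits together, here is a minimal sketch of calling the routine from Access. The workbook setup and the SortField/TableName values shown are assumptions for illustration, not part of the original routine:

```vba
' Hedged example: assumes a reference to the Microsoft Excel Object Library.
Dim xlApp As Excel.Application
Set xlApp = New Excel.Application
xlApp.Visible = True
xlApp.Workbooks.Add
' ... paste your Access data into the active sheet, starting at A1 ...
' Then format it, sorting on a (hypothetical) TimeStamp column:
FormatReportGeneric xlApp, SortField:="TimeStamp", TableName:="tblReport"
```

Because SortField and TableName are optional, `FormatReportGeneric xlApp` alone also works; you'll just get an unsorted table named Table1.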
Arrays can be handy, and there are some excellent functions in VBA-Land to help us deal with arrays, but not so much with simple delimited lists. The lists must first be converted to arrays before they become useful in VBA.
A simple telephone number can be considered a delimited list. It consists of three parts delimited by a dash (-). Thus, the telephone number 800-555-1212 has the following parts:
- Area Code: 800
- Exchange: 555
- Number: 1212
If you want to get just the Exchange portion, you can use the Split() function to convert the phone number into a three-element array, then retrieve the second element. That’s not so hard – until you need to run some code to return only phone numbers from a list with a given Exchange. It’s a bit cumbersome to write the VBA code for such a situation, and next to impossible for a SQL query. For one thing, SQL doesn’t do arrays, making it supremely difficult to get a specific element without migraines and extra caffeine.
It can get even hairier if the format used includes multiple delimiters, such as 800/555-1212.
If you (like I do) need to perform such analyses on a regular basis, the GetElement() function may be of use to you. GetElement() essentially does nothing more than what was explained above – it converts the string to an array and returns a selected element. Here is the code:
Function GetElement(LineVal As String, WhatElement As Integer, Optional Delimiter As String = "|") As String
    Dim astrParts As Variant
    On Error GoTo errHandler
    ' Create an array from the input.
    astrParts = Split(LineVal, Delimiter)
    ' Get the selected array element (WhatElement is one-based;
    ' the array returned by Split() is zero-based).
    GetElement = astrParts(WhatElement - 1)
    Exit Function
errHandler:
    ' [Put your favorite error handling code here]
End Function
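A quick way to test the function, using made-up sample numbers, is from the VBA Immediate window (remember that WhatElement is one-based even though the underlying array is zero-based):

```vba
' Immediate window checks (sample phone numbers are made up):
? GetElement("800-555-1212", 2, "-")   ' prints 555
? GetElement("800/555-1212", 2, "/")   ' prints 555-1212
```

The second check shows why nesting is needed for multiple delimiters: splitting on the slash alone leaves the Exchange and Number fused together.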
With this simple function, you can write a query such as the following to retrieve the Exchange element from dash-delimited numbers (800-555-1212):
SELECT GetElement([Phone],2,"-") AS Exchange FROM Customers WHERE GetElement([Phone],2,"-") = "555";
Even with multiple delimiters, it is not particularly tough. Take 800/555-1212. The first element is the Area Code (800), delimited by a forward slash. The first element of the second element (555-1212) is the Exchange (555), delimited by a dash. So we modify the query as follows:
SELECT GetElement(GetElement([Phone],2,"/"),1,"-") AS Exchange FROM Customers WHERE GetElement(GetElement([Phone],2,"/"),1,"-") = "555";
I use this in Excel, as well, in my Personal Macro Workbook.
This post is a break from the recent code snippet entries to relate something I have too much experience not to know, but didn't know anyway.
I have been developing MS Access databases for nearly two decades, now, but I have rarely had occasion to have a Primary Key that consisted of multiple fields, one of which might consistently be null or blank. My PKs have typically been very specific, so the blank field in a PK is uncommon, to say the least. Not unheard-of, but definitely uncommon.
First of all, SQL Server does not like Nulls in a Primary Key. MS Access doesn't care, but when you build tables in SQL Server that you will link to MS Access, and one of the key fields needs to be able to hold a blank, you'll have to make it an empty string instead of a Null.
And now to the point of this blog: DO NOT RELY ON DEFAULT VALUES IN A PRIMARY KEY!
I discovered that you cannot rely on default values in SQL Server to supply a value for the Primary Key, especially if you plan to link the table via ODBC to MS Access.
Weirdnesses begin to happen at that point. Not good ones. Bad. Very bad.
Briefly - I had a table with a 5-field PK, and one of those fields is consistently blank. I had a default value in the SQL Server definition of an empty string for the field in question and everything seemed peachy until the users complained that when they entered in a new row, the data changed - visibly changed to something completely different - when they committed the new row. It fixed itself with a refresh, but it was seriously causing confusion. If they edited the row to "fix" it - change it back to what they'd just entered - they discovered that it was actually changing some other row in the database! Worse, if they deleted it because it was suddenly wrong, it would delete the other row!!
Holy sheep dip!!! This was NOT a good weirdness! It was actually displaying data from some other seemingly random row in the database.
After a significant amount of research, I discovered that entering a value - either manually or programmatically - resolved the issue. Allowing the value to derive from the default in the table definition is not good.
In SQL Server, when entering data manually in edit mode WITHOUT entering a value in EVERY key field (relying on default values), you'll get an exclamation point in the record selector with a control tip that states the following:
This row was successfully committed to the database. However, a problem occurred when attempting to retrieve the data back after the commit. Because of this, the displayed data in this row is read-only. To fix this problem, please re-run the query.
This does not seem like a significant issue until you try adding a new record in MS Access. THAT is when the weirdness begins and data starts to go wiggy.
It is imperative that you manually (or programmatically) supply a value for every element of the Primary Key.
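In an Access front end, one way to guarantee this is to populate the usually-blank key field in the form's BeforeInsert event. This is a sketch under assumptions: SubLocation is a hypothetical name standing in for whichever of your key fields is normally blank.

```vba
' Runs once when the user starts a new row, before it is committed.
Private Sub Form_BeforeInsert(Cancel As Integer)
    ' SubLocation is a hypothetical, usually-blank Primary Key field.
    ' Supply the empty string explicitly instead of relying on the
    ' default value in the SQL Server table definition.
    If IsNull(Me.SubLocation) Then
        Me.SubLocation = ""
    End If
End Sub
```

The same idea applies to programmatic inserts: include every key column, with an explicit '' where the value is blank, rather than omitting the column and trusting the default.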