As we all know, Visual FoxPro provides an extremely rich and varied development environment, but sometimes too much of a good thing leads us into bad habits. When writing code there are usually several ways of achieving the same result, but all too often there are significant differences in performance, and the only way to ensure that our code is optimized for the best performance is to test, test again and re-test under as many different conditions as can be devised. Having said that, it is equally important to recognize that the first requirement of any code is that it is functionally correct. As Marcia Akins (Microsoft MVP, author and co-owner of Tightline Computers Inc) has been known to say, “doing something wrong as fast as possible is not really very helpful”.

But too often we tend to treat getting the correct functionality as the end of the story and, once something is working, simply move on to the next issue. The reality is that most developers typically review and optimize their code at the same time as they go back to add the comments (i.e. never)!

However, by applying some basic rules and techniques you can ensure that you avoid some of the more common problems and produce better and more efficient code the first time around. The more frequently you can do that, the less time you will need to spend revisiting and ‘tuning’ functional code. This has two quite separate benefits for any developer:

· The less tweaking of code you have to do once it is working correctly, the less chance there is of introducing bugs into functional code

· Getting it right immediately saves time; not having to revisit code is always quicker than re-factoring to improve performance or usability

The purpose of this series of articles is to review some of the things that we can do, as we are writing code, to ensure that our software is as efficient and as usable as possible, and to minimize the need to revisit working code to tweak it. We’ll begin with some of the basics and get a little more advanced in the later articles in the series.

Warn your users, but don’t treat them like idiots

Someone (sorry, but I don’t remember who) once remarked that the only two things end-users want from their software are that it won’t make them look silly in front of the boss by giving them the wrong answers, and that it won’t treat them like idiots. Unfortunately we, as developers, tend to concentrate so much on the first that we forget about the second. Yet one of the most basic things that we can do in our applications is to try to strike the proper balance between providing relevant warnings and ‘nagging’.

One of my personal pet hates in this area comes from VFP itself. Have you ever noticed that when you are stepping through code in the debugger and hit “Fix” you get an immediate dialog that asks “Cancel program?”. I understand that the intention here is to warn me in case, say, I inadvertently opened the drop-down and chose the “Fix” option when I really wanted some other option (am I really that dumb?). But in this dialog the default option is “Yes”, which is not really consistent with the reason for displaying the dialog in the first place (i.e. to ‘fail safe’). Still, you can argue that it makes sense, because the chances really are that if I chose “Fix” I do want to fix the code.

However, if the code in question is a class definition, choosing “Fix” is no longer sufficient, because as soon as you try to edit in the opened code window you get another dialog – and this time it asks:

“Remove classes from memory?”

Now hang on a moment, we have already told VFP that:

[1] we want to fix the code that is running

[2] yes, we really do want to cancel the running program

And now it asks if we want to remove the class? How are we supposed to fix it if we don’t? To make matters worse, the selected default option is “Ignore”!

So if you happen to press the Enter key as the first step in your edit, to insert a new line (and how often is that not the first thing you want to do?), this idiot dialog flashes onto your screen and goes away again, selecting “Ignore”, and nothing happens. Now look, I am, after all, a developer, and surely if I am attempting to edit a class definition I actually want to do it? Who does VFP think it is to assume that I don’t know what I am doing? This is really annoying, not to say insulting!

Now consider how often in your own applications you have dialogs that nag the user like this. The classic is the “Are you sure?” question. Here’s the scenario: the user opens the search screen, does a locate for some value and fetches a record. They then select, from your options, “Delete”. A dialog box pops up saying “This will delete this record, are you sure?” with “No” as the default option (it’s “fail safe” time, folks…). How insulting is that? Of course they want to delete the record; they just spent 20 minutes finding the darn record, and now you ask them if they are sure this is what they meant to do?

Of course, I hear you say, there is always the possibility that they hit Delete by accident. But whose fault is that? Answer: yours! You are the one who made it possible to hit ‘Delete’ by accident, no one else. If the delete functionality is so sensitive, then the user interface is wrong to make it so casually available. (Do you ask “Are you sure?” when they want to add a record, or save changes…?)

Why not make enabling the “Delete” button a positive action, so that the user has to do something deliberate to initiate the process and does not then have to deal with “This will delete a record” followed by “Are you sure?” followed by “Are you really, really sure?” and so on, ad infinitum? At the end of the day you, the developer, have to either execute the DELETE command or cancel the operation – better to warn the user, and give them the chance to cancel, before they have invested their time in the process.
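As an illustration, here is a minimal sketch of one way to do this. It is only a sketch, and all the names in it (frmCustomer, chkArmDelete, cmdDelete) are invented for the example: the point is simply that the Delete button stays disabled until the user performs a deliberate arming action, so no confirmation dialog is needed afterwards.

*!* Minimal sketch only - assumes a table is open in the current
*!* work area; all names here are invented for illustration
DEFINE CLASS frmCustomer AS Form

  ADD OBJECT chkArmDelete AS CheckBox WITH ;
    Caption = "Enable delete", Top = 10, Left = 10, AutoSize = .T.

  ADD OBJECT cmdDelete AS CommandButton WITH ;
    Caption = "Delete", Enabled = .F., Top = 40, Left = 10

  *!* Arming the delete is the user's positive action
  PROCEDURE chkArmDelete.InteractiveChange
    ThisForm.cmdDelete.Enabled = ( This.Value = 1 )
  ENDPROC

  *!* No "are you sure?" here - the user has already opted in
  PROCEDURE cmdDelete.Click
    DELETE NEXT 1
    ThisForm.chkArmDelete.Value = 0
    This.Enabled = .F.
  ENDPROC

ENDDEFINE

The same idea works with any other deliberate gesture (a right-click option, a toolbar toggle, and so on); what matters is that the destructive action is armed explicitly rather than confirmed retrospectively.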

Inform your users, but don’t compromise performance to do so

Here is some code that I came across in a real-life application recently. The application in question was written some time ago, and its data volumes had grown considerably over the years. The code is fairly common: it simply updates a WAIT window with a message indicating the progress of an operation running on every record in a table. Here is the relevant part of the SCAN loop:

*!* Initialize record progress counter
lnCnt = 0
lcOfRex = " of " + TRANSFORM( RECCOUNT( ALIAS() ) )
SCAN
  *!* Update progress display on every record
  lnCnt = lnCnt + 1
  lcTxt = 'Processing record ' + TRANSFORM( lnCnt ) + lcOfRex
  WAIT lcTxt WINDOW NOWAIT
  *!* ... the actual per-record processing follows here ...
ENDSCAN

Now, the interesting thing about this process was that it was running against a table that by then contained more than 125,000 records. “So what?” I hear you say. Well, the time taken to execute the process was about 3 minutes. But try this code on your local machine:

LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
  lnCnt = lnCnt + 1
  lcTxt = 'Processing record ' + TRANSFORM( lnCnt ) + lcOfRex
  WAIT lcTxt WINDOW NOWAIT
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )

On my PC this code took just over 32 seconds to run, and what does it do? Nothing at all! The screen display is not even readable. The only conclusion that could be drawn was that this little bit of utterly useless code was taking more than 15% of the total run time. Try the following version of the same code:

LOCAL lnCnt, lcOfRex, lnSt, lnNum, lcTxt, lnEn
lnCnt = 0
lcOfRex = " of 125000"
lnSt = SECONDS()
FOR lnNum = 1 TO 125000
  lnCnt = lnCnt + 1
  IF MOD( lnCnt, 10000 ) = 0
    lcTxt = 'Processing record ' + TRANSFORM( lnCnt ) + lcOfRex
    WAIT lcTxt WINDOW NOWAIT
  ENDIF
NEXT
lnEn = SECONDS()
? STR( lnEn - lnSt, 8, 4 )

This runs, on my machine, in less than 0.3 of a second – more than 100 times faster! Now consider the actual process in question: it was dealing with 125,000 records in about 3 minutes, which means it was running at about 700 records per second. Can the user even see a screen updating at that rate, let alone derive any useful benefit from it? Of course not, so why do it?

The question that this is all leading up to is, therefore:

What is a reasonable interval at which to update the screen?

Unfortunately there is no ‘right’ answer, but I would suggest that you can apply some common sense. The first requirement is to have some idea of the total length of the process in question. Obviously, if the process runs for three hours, updating every ten seconds is probably unnecessary; conversely, if it takes three minutes, then a ten-second update interval seems reasonable.

The general rule of thumb I use is to try to update my user information display 200 times per process (i.e. at every 0.5% step of completion). My progress bar therefore has 200 units, and I set my update interval by calculating the expected progress that constitutes 0.5% of the total, from the number of records and the average time to process each – a sketch of the calculation appears below.

How do I know the average time? From testing!

When I am developing the code, I test it, and I base my assessment of the average processing time on a test that uses a volume of data at least 50% larger than I expect to see in production. Yes, this sometimes means that my progress updates are too fast when the system first goes into use, but as data volumes grow, the display rate typically gets closer to my target of one update per 0.5% of completion. Even if I was way off in my estimate, and the process ends up taking twice as long per record as I expected, I am still updating the display every 1% of the way – which in a three-hour process would mean the screen gets updated every 100 seconds or so.
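To make the arithmetic concrete, here is a minimal sketch of the 0.5% rule for a simple record-driven process. It is only an illustration: the variable names are invented, and it assumes the table being processed is selected in the current work area.

LOCAL lnTotal, lnInterval, lnCnt, lcTxt
lnTotal    = RECCOUNT( ALIAS() )
lnInterval = MAX( 1, INT( lnTotal / 200 ) )  && one update per 0.5% of the run
lnCnt      = 0
SCAN
  lnCnt = lnCnt + 1
  IF MOD( lnCnt, lnInterval ) = 0
    lcTxt = 'Processing record ' + TRANSFORM( lnCnt ) + ' of ' + TRANSFORM( lnTotal )
    WAIT lcTxt WINDOW NOWAIT
  ENDIF
  *!* ... the actual per-record processing goes here ...
ENDSCAN
WAIT CLEAR

If the interval needs to be based on time rather than record count, the same MOD() test applies; only the calculation of lnInterval changes, using the average per-record time measured in testing.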

This may all sound very simple and obvious but, as so often in application development, it is the little things that make the difference – especially when they are obvious to the end user.

 

8 Responses to Writing better code (Part 1)

  • Andy,

    Excellent post. These are things I totally agree with. My personal framework does the "Are you sure?" message by default when deleting, but there is an option to never show it again…and a way to turn it back on in the Options dialog. This way, the user determines how they want the application to behave, not me.

  • This brings back memories of something Jim Booth talked about on the AppDev training CDs. Something to the effect that it’s really annoying when an application presents a message box stating something like: "This program just died… yada yada yada", and the caption for the only command button in the message box is "Ok".

  • Jamie Osborn says:

    Great post.

    So many questions in forums and blog posts/articles are about how to get VFP to achieve some specific outcome ("How do I get a report to conditionally print bold… yada yada") and not nearly enough are about design.

    You can pretty much always find a way to get VFP to solve your specific problem, but as to designing the solution well – that is another story.

  • Good stuff, Andy. I think it’s Alan Cooper you’re paraphrasing at the top.

    I’ve been thinking about the first issue a lot as I prepare my "Best Practices in User Interfaces" session for GLGDW. There’s another point besides annoying the user here. If you always show the confirmation dialog, then hitting the right keystroke becomes an automatic action for the user and they don’t even see it when it really matters. Think about how you delete files in Explorer: highlight, Del, Enter (to dismiss the "Are you sure" dialog). I don’t even think before I hit Enter.

    One other consideration. If you don’t confirm deletion and other data-changing actions, you should implement a robust Undo facility, so the user has a way out when he does screw up.

  • andykr says:

    Thanks to everyone for the kind words. I will be following up over the next few months with further musings on this and related topics. Your time and thoughts are always appreciated, thank you.

  • Fernando D. Bozzo says:

    Very useful point. I have just one comment about the last part, on keeping the user informed: users (we are users too) get nervous when they don’t know what is happening. You say "in a three-hour process would mean the screen gets updated every 100 seconds or so"… that’s too long; if you are watching the screen to see the progress of the work, waiting more than 3 seconds makes you nervous, never mind waiting 100 seconds to see something moving! We can say that the user should be informed of progress roughly every 3 seconds.

  • andykr says:

    Fernando,

    Actually, what I said is that there is "no right answer" to how often to update the screen, and that my PERSONAL rule is to do it every 0.5% of the run time. If you want to do it more often, that’s fine by me (though are you seriously telling me that, given a three-hour process, a person is going to be checking the screen every 3 seconds?). I would expect that most people would start that sort of process running and then go out to lunch for at least two hours….

    My point is that the information has to be relevant to what the user is doing and that too much is at least as bad as too little.

  • Bill Coupe says:

    Relevant is indeed the issue, Andy. Unfortunately, what’s relevant to me may not be to you. I’ve been experimenting with a user-configurable response/refresh idea where the user can specify how often they’d like to be notified.

    I recently had a very large process that ran against a remote DB2 database and would update about 5-7 records a second if I updated the screen with every write, but ran about twice as fast if I updated only every 10 records…

    As this file was nearly 250,000 records, updating 10 records a second instead of 5 makes a huge difference in throughput, but the approximately 1-second interval between screen updates made little difference in watching progress.

    Good post, as this is one of those areas where it’s extremely difficult to strike a ‘universal balance’.
